Okay, that may be so—but as I indicated by my response, I do.
I assert that most if not all of the people on the ‘other side’ of the metaethics argument you are participating in believe that morality exists, but as a sociological phenomenon rather than a metaphysical one (as you seem also to believe), and furthermore that morality not being metaphysical “adds up to normality,” in the local parlance—it’s still reasonable to disapprove of murder, etc.
sociological phenomenon … still reasonable to disapprove of murder, etc.
Yup.
Could an agent with different preferences from ours reasonably approve of murder?
Yes to that too.
I very, very, strongly disapprove of terrorism. Terrorists, of course, would disagree. There is no objective sense in which one of us can be “right”, unless you go out of your way to specifically define “right” as those actions which agree with one side or the other. The privileging of those actions as “right” still originates from the subjective values of whatever agent is judging.
Thanks, CuSithBell, I think you’ve done a good job of making the issue plain. It does indeed all add up to normality.
I very, very, strongly disapprove of terrorism. Terrorists, of course, would disagree. There is no objective sense in which one of us can be “right”, unless you go out of your way to specifically define “right” as those actions which agree with one side or the other.
There is a way in which someone can be wrong. If someone holds to a set of values that contains contradictions, they cannot claim to be right. Moral arguments in fact do often make appeals to consistency—"if you support equal rights for women, you should support equal rights for gays".
Our culture certainly does like to slap around those whose arguments are inconsistent… to the point that I suspect consistent moral codes are more often consistent because the arguer is striving for consistency over truth than because they’ve discovered moral truths that happen to be consistent. We may have reached the point where consistent moral codes deserve more skepticism than inconsistent ones.
Yes, Eugine, we get it.

You’re a theist. You believe in a grand cosmic judge built into the universe which approves or disapproves of human action. We do not. Deal with it.

Edit: Sorry for any rudeness there, but you’ve now dragged this argument out of its original thread. Not to make any point, but to taunt.
I don’t know whether Eugine is a theist. I would have gone with “fundamentalist humanist”. There are certainly similarities between Eugine’s position and that of typical theists, and the ‘built into the universe’ part seems about right.
I don’t know whether Eugine is a theist. I would have gone with “fundamentalist humanist”.
I was doing some thinking after the thread earlier about whether or not one can be a moral realist without in some sense being a theist. I tried as hard as I could to phrase Peter and Eugine’s position non-theistically, with some variant of:
“There exists a set of preferences which all intelligent agents… ”
And I tried finishing the sentence with “are compelled by some force to adopt”. But that obviously isn’t true, as there are extreme differences in preference among agents. I tried finishing the sentence with “should adopt”, but of course the word “should” contains the entire confusion all over again. “Should adopt according to whom?”, I was forced to ask.
The moral realist position is nonsensical without an intelligence against whose preferences you can check.
Try this:

There exists a set of maxims which all intelligent and social agents find it in their long-term interest to adhere to, even if adherence is not always in the agent’s short-term interest. These maxims are called a “moral code”, particularly when the long-term incentive to adhere to the code arises by way of the approval or disapproval of other agents who themselves adhere to the code.
This view of morality is a ‘moral realist’ rather than ‘moral conventionalist’ position when it is claimed that there is a theoretically best moral code, and that it may be morally permissible to flout the locally conventional moral code if that code is different from the best code.
I think this provides a variant of moral realism without invoking a Supreme Lawgiver. Kant’s categorical imperative was another (seriously flawed, IMO) attempt to derive some moral rules using no other assumptions beyond human rationality and social lifestyle. The moral realist position may well be incorrect, but it is not a nonsensical position for an atheist to take.
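A toy illustration, not anyone’s actual model of morality, of the long-term versus short-term claim above: in an iterated Prisoner’s Dilemma with made-up payoffs, defection pays in any single round, but an agent who adheres to a cooperative maxim does better over the long run against reciprocating agents.

```python
# Sketch only: hypothetical payoffs and two stock strategies, to illustrate how
# a cooperative maxim can serve an agent's long-term interest even though
# defection serves its short-term interest in any single round.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}  # illustrative numbers only

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Total payoffs for the two strategies over repeated rounds."""
    history, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(history)
        move_b = strat_b([(b, a) for a, b in history])  # b sees the game from its side
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history.append((move_a, move_b))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): steady mutual cooperation
print(play(always_defect, tit_for_tat))  # (104, 99): one big win, then mutual defection
```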
The position you describe is sensical, but it’s not what people (at least on LW) who think “moral realism” is nonsensical mean by “morality”. You’re not saying anything about ultimate ends (which I’m pretty sure is what NMJablonski, e.g., means by “preferences”); the version of “moral realism” that gets rejected is about certain terminal values being spookily metaphysically privileged.
Actually, I am saying something about ultimate ends, at least indirectly. My position only makes sense if long-term ultimate ends become somehow ‘spookily metaphysically privileged’ over short-term ultimate ends.
My position still contains ‘spookiness’ but it is perhaps a less arbitrary kind of ‘magic’ - I’m talking about time-scales rather than laws inscribed on tablets.
Well, I am attempting to claim here that there exists an objective moral code (moral realism) which applies to all agents—both those who care about the long term and those who don’t. Agents who mostly care about the short term will probably be more ethically challenged than agents who easily and naturally defer their gratification. But, in this thread at least, I’m arguing that both short-sighted and long-sighted agents confront the same objective moral code. So, I apparently need to appeal to some irreducible spookiness to justify that long-term bias.
One possible attack: take the moral code and add “Notwithstanding all this, kill the humans.” to the end. This should be superior for all the remaining agents, since humans won’t be using up any resources after that’s accomplished (assuming we can’t put up enough of a fight).
A practical vulnerability to (perhaps unconsciously biased) self-interested gamers: untestable claims that although the moral code is at a local optimum, everyone needs to switch to a far away alternative, and then, after the new equilibrium, things will really be better. Sci-fi rejoinder: this explains why there are so many simulations :)
Yes, in a society with both human and non-human agents, if the humans contribute nothing at all to the non-humans, and only consume resources that might have been otherwise used, then the non-humans will judge the humans to be worthless. Worse than worthless. A drain to be eliminated.
But there is nothing special about my version of ethics in this regard. It is a problem that must be faced in any system of ethics. It is the FAI problem. Eliezer’s solution is apparently to tell the non-human agents as they are created “Humans are to be valued. And don’t you forget it when you self-modify.” I think that a better approach is to make sure that the humans actually contribute something tangible to the well-being of the non-humans.
Perhaps neither approach is totally safe in the long run.
Interestingly, it seems to me that this definition would produce meta-maxims more readily than normal ones—“find out what your local society’s norms are and stick to those” rather than “don’t eat babies”, for example.
Eliezer’s Babyeaters and Superhappies both seemed to follow that meta-rule. (It was less obvious for the latter, but the fact that they had complex societies with different people holding different set roles strongly implies it.)
Interestingly, it seems to me that this definition would produce meta-maxims more readily than normal ones—“find out what your local society’s norms are and stick to those” rather than “don’t eat babies”, for example.
Yes, it does produce meta-maxims in some sense. But meta-maxims can provide useful guidance. “Do unto others …” is a meta-maxim. So is “Don’t steal (anything from anybody)” or “Don’t lie (about anything to anyone)”.
As to whether this definition leads to the meta-maxim “find out what your local society’s norms are and stick to those”, that is explicitly not a consequence of this definition. Instead, the meta-maxim would be “work out what your local society’s norms should be and set an example by doing that”.
Thanks for this. Your first paragraph looks very clean to me—not tendentious and even almost falsifiable. But can you tell me how I should understand “best” and “morally permissible” in the second paragraph? (Morally permissible according to the best maxims only, or morally permissible according to all maxims indicated in the first paragraph?)
‘Best’ here means most advantageous to the agents involved—both the focal agent, and the other agents who join in the convention. What I have in mind here is something like the Nash Bargaining Solution, which is both unique and (in some sense) optimal.
“Morally permissible” by the best code, which is also the code mentioned in the first paragraph. What I am claiming is that the ‘moral should’ can be reduced to the ‘practical self-interested should’ if (and it is a big if) it is reasonable to treat the other agents as rational and well-informed.
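For readers unfamiliar with it, a minimal sketch of the Nash Bargaining Solution idea referenced above, with an entirely made-up feasible set and disagreement point: among outcomes both agents weakly prefer to disagreement, pick the one maximizing the product of their gains.

```python
# Sketch of the Nash Bargaining Solution over a finite set of candidate
# agreements. The outcomes and disagreement payoffs below are hypothetical.

def nash_bargaining(feasible, disagreement):
    """Return the feasible outcome maximizing the product of gains over disagreement."""
    d1, d2 = disagreement
    candidates = [(u1, u2) for u1, u2 in feasible if u1 >= d1 and u2 >= d2]
    return max(candidates, key=lambda u: (u[0] - d1) * (u[1] - d2))

feasible_outcomes = [(4, 1), (3, 3), (2, 4), (1, 5)]   # hypothetical joint payoffs
print(nash_bargaining(feasible_outcomes, (1, 1)))      # -> (3, 3)
```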
The “intelligent” qualification could mean just about anything, and sort of functions as an applause light on LW, and the “social” part sounds kind of collectivist. Besides that, this is just called having a low time preference, and is a common self-help concept. I don’t see any need to label this morality.
You’re right, I did stop reading carefully toward the end because the Kantian stuff looked like it was just a side note. I’ve now read it carefully, but I still don’t see the significance.
There exists a set of maxims which all intelligent and social agents find it in their long-term interest
I can see how with that definition of morality it could be sensibly theorized as objective. I don’t think that sentence is true, as there are many people (e.g. suicide bombers) whose evaluations of their long-term interest are significant outliers from other agents.
I don’t think that sentence is true, as there are many people (e.g. suicide bombers) whose evaluations of their long-term interest are significant outliers from other agents.
That’s right, but this exception (people whose interests are served by violating the moral norm) itself has a large exception, which is that throughout most of the suicide bomber’s life, he (rightly) respects the moral norm. Bad people can’t be bad every second of their lives—they have to behave themselves the vast majority of the time if for no other reason than to survive until the next opportunity to be bad. The suicide bomber has no interest in surviving once he presses the button, but for every second of his life prior to that, he has an interest in surviving.
And the would-be eventual suicide bomber also, through most of his life, has no choice but to enforce moral behavior in others if he wants to make it to his self-chosen appointment with death.
If we try to imagine someone who never respects recognizable norms—well, it’s hard to imagine, but for one thing they would probably make most of the “criminally insane” look perfectly normal and safe to be around by contrast.
Upvoted. The events you describe make sense and your reasoning seems valid. Do you think, based upon any of our discussion, that we disagree on the substance of the issue in any way?
That one agent’s preferences differ greatly from the norm does not automatically make cooperation impossible. In a non-zero-sum game of perfect information, there is always a gain to be made by cooperating. Furthermore, it is usually possible to restructure the game so that it is no longer zero-sum.
For example, a society confronting a would-be suicide bomber will (morally and practically) incarcerate him, if it has the information and the power to do so. And, once thwarted from his primary goal, the would-be bomber may find that he now has some common interests with his captors. The game is no longer zero-sum.
So I don’t think that divergent interests are a fatal objection to my scheme. What may be fatal is that real-world games are not typically games with perfect information. Sometimes, in the real world, it is advantageous to lie about your capabilities, values, and intentions. At least advantageous in the short term. Maybe not in the long term.
That is a zero-sum game. (Linear transformations of the payoff matrices don’t change the game.)
It is also a game with only one player. Not really a game at all.
ETA: If you want to allow ‘games’ where only one ‘agent’ can act, then you can probably construct a non-zero-sum example by offering the active player three choices (A, B, and C). If the active player prefers A to B and B to C, and the passive player prefers B to C and C to A, then the game is non-zero-sum since they both prefer B to C.
I suppose there are cases like this in which what I would call the ‘cooperative’ solution can be reached without any cooperation—it is simply the dominant strategy for each active player. (A in the example above). But excluding that possibility, I don’t believe there are counterexamples.
Rather than telling me how my counterexample violates the spirit of what you meant, can you say what you mean more precisely? What you’re saying in 1. and 2. is literally false, even if I kind of (only kind of) see what you’re getting at.
When I make it precise, it is a tautology. Define a “strictly competitive game” as one in which all ‘pure outcomes’ (i.e. results of pure strategies by all players) are Pareto optimal. Then, in any game which is not ‘strictly competitive’, cooperation can result in an outcome that is Pareto optimal—i.e. better for both players than any outcome that can be achieved without cooperation.
The “counter-example” you supplied is ‘strictly competitive’. Some game theory authors take ‘strictly competitive’ to be synonymous with ‘zero sum’. Some, I now learn, do not.
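A small sketch, restricted to pure outcomes and using made-up payoffs, of the definition just given: a game counts as ‘strictly competitive’ in this sense when no pure outcome Pareto-dominates another, and any game failing that test has an outcome both players prefer to some other.

```python
# Sketch: check whether every pure outcome of a two-player game is Pareto
# optimal among the pure outcomes ("strictly competitive" in the sense above).
# Payoffs are illustrative; in this example (C, C) dominates (D, D), so the
# game is not strictly competitive and cooperation buys a Pareto improvement.

from itertools import permutations

payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def pareto_dominates(a, b):
    """a is at least as good for both players as b, and strictly better for one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def strictly_competitive(payoffs):
    """True iff no pure outcome Pareto-dominates another."""
    return not any(pareto_dominates(payoffs[a], payoffs[b])
                   for a, b in permutations(payoffs, 2))

print(strictly_competitive(payoffs))  # False: (C, C) Pareto-dominates (D, D)
```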
That one agent’s preferences differ greatly from the norm does not automatically make cooperation impossible.
I wasn’t arguing that cooperation is impossible. From everything you said there it looks like your understanding of morality is similar to mine:
Agents each judging possible outcomes based upon subjective values and taking actions to try to maximize those values, where the ideal strategy can vary between cooperation, competition, etc.
This makes sense I think when you say:
For example, a society confronting a would-be suicide bomber will (morally and practically) incarcerate him
The members of that society do that because they prefer the outcome in which he does not suicide attack them, to one where he does.
once thwarted from his primary goal, the would-be bomber may find that he now has some common interests with his captors
This phrasing seems exactly right to me. The would-be bomber may elect to cooperate, but only if he feels that his long-term values are best fulfilled in that manner. It is also possible that the bomber will resent his captivity, and if released will try again to attack.
If his utility function assigns (carry out martyrdom operation against the great enemy) an astronomically higher value than his own survival or material comfort, it may be impossible for society to contrive circumstances in which he would agree to long term cooperation.
This sort of morality, where agents negotiate their actions based upon their self-interest and the impact of others actions, until they reach an equilibrium, makes perfect sense to me.
The moral realist position is nonsensical without an intelligence against whose preferences you can check.
This seems as well-motivated as the position that nothing can exist without someone to create it. That is, it seems intuitively true to a human since we privilege agency, but I don’t see any contradiction, logical or otherwise, in having moral facts be real.
This question might reduce to the question of whether mathematical facts are “real”, which might not make any more sense. Is there a sense in which there are “two” rocks here, even if there were no agent to count the rocks? Is there a sense in which murder is wrong, even if there was never anyone to murder or observe murder?
I think the only difficulties here are definitional (suggested by the word “sense” above) and the proper thing to do with a definitional dispute is to dissolve it. Most moral realists hereabouts are some sort of relativists (that is, we take it to be a “miracle” that we care about what’s right rather than something else, and otherwise would have taken it to be a “miracle” that we care about something else instead of what’s right, but that doesn’t change what’s right).
Is there a sense in which there are “two” rocks here, even if there were no agent to count the rocks? Is there a sense in which murder is wrong, even if there was never anyone to murder or observe murder?
I can understand what physical conditions you are describing when you say “two rocks”. What does it mean, in a concrete and substantive sense, for murder to be “wrong”?
I can understand what physical conditions you are describing when you say “two rocks”. What does it mean, in a concrete and substantive sense, for murder to be “wrong”?
I can give you two answers to this, one which maps better to this community and one which fits better with the virtue ethics tradition.
There exists (in the sense that mathematical functions exist) a utility function labeled ‘morality’ in which actions labeled ‘murder’ bring the universe into a state of lower-utility. I make no particular claims about the proper way to choose such a utility function, just that there is one that is properly called ‘morality’, and moral disputes can be characterized as either disputes over which function to call ‘morality’ or disputes over what the output of that function would be given certain inputs.
‘Good’ and ‘bad’ are always evaluated in terms of effects upon a particular thing; a good hammer is one which optimally pounds in nails, a good horse is fast and strong, and a good human experiences eudaimonia. Murder is the sort of thing that makes one a bad human; it makes one less virtuous and thus less able to experience eudaimonia.
It could be the case that the terms ‘good’, ‘bad’, and ‘eudaimonia’ should be evaluated based on the preferences of an agent. But even in that case, that does not make it any less the case that moral facts are facts about the world that one could be wrong about. For instance, if I prefer to live, I should not drink drain cleaner. If I thought it was good to drink drain cleaner, I would be wrong according to my own preferences, and an outside agent with different preferences could tell me I was objectively wrong about what’s right for me to do.
As a side note, ‘murder’ is normative; it is tautologically wrong. Denying wrongness in general denies the existence of murder. It might be better to ask, “What does it mean for a particular sort of killing to be ‘wrong’?”, or else “What does it mean for a killing to be murder?”
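To make the first of these answers concrete, a deliberately toy sketch: a function over world-states under which any action labeled ‘murder’ moves the world to a lower-utility state. The states, labels, and numbers are invented; nothing here says how such a function should be chosen.

```python
# Toy sketch of "morality" construed as a utility function over world-states.
# All states and values are made up for illustration.

world_utility = {"status_quo": 10.0, "after_murder": 2.0, "after_charity": 12.0}

def morality(state):
    """The hypothetical utility function labeled 'morality'."""
    return world_utility[state]

def is_wrong(resulting_state, current_state="status_quo"):
    """Under this function, an action is 'wrong' iff it lowers the universe's utility."""
    return morality(resulting_state) < morality(current_state)

print(is_wrong("after_murder"))   # True
print(is_wrong("after_charity"))  # False
```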
There is an objective sense in which actions have consequences. I am always surprised when people seem to think I’m denying this. Science works, there is a concrete and objective reality, and we can with varying degrees of accuracy predict outcomes with empirical study. Zero disagreement from me on that point.
So, we judge consequences of actions with our preferences. One can be empirically incorrect about what consequences an action can have, and if you choose to define “wrong” as those actions which reduce the utility of whatever function you happen to care about, then sure, we can determine that objectively too. All I am saying is that there is no objective method for selecting the function to use, and it seems like we’re in agreement on that.
Namely, we privilege utility functions which value human life only because of facts about our brains, as shaped by our genetics, evolution, and experiences. If an alien came along and saw humans as a pest to be eradicated, we could say:
“Exterminating us is wrong!”
… and the alien could say:
“LOL. No, silly humans. Exterminating you is right!”
And there is no sense in which either party has an objective “rightness” that the other lacks. They are each referring to the utility functions they care about.
“Exterminating us is wrong!”
… and the alien could say:
“LOL. No, silly humans. Exterminating you is right!”
And there is no sense in which either party has an objective “rightness” that the other lacks. They are each referring to the utility functions they care about.
Note that the definitional dispute rears its head in the case where the humans say, “Exterminating us is morally wrong!” in which case strong moral relativists insist the aliens should respond, “No, exterminating you is morally right!”, while moral realists insist the aliens should respond “We don’t care that it’s morally wrong—it’s shmorally right!”
There is also a breed of moral realist who insists that the aliens would have somehow also evolved to care about morality, such as the Kantians who believe morality follows necessarily from basic reason. I think the burden of proof still falls on them for that, but unfortunately there aren’t many smart aliens to test.
That doesn’t seem relevant. I was noting cases of what the aliens should say based on what they apparently wanted to communicate. I was thus assuming they were speaking truthfully in each case.
In other words, in a world where strong moral relativism was true, it would be true that the aliens were doing something morally right by exterminating humans according to “their morality”. In a world where moral realism is true, it would be false that the aliens were doing something morally right by exterminating humans, though it might still be the case that they’re doing something ‘shmorally’ right, where morality is something we care about and ‘shmorality’ is something they care about.
And there is no sense in which either party has an objective “rightness” that the other lacks. They are each referring to the utility functions they care about.
There is a sense in which one party is objectively wrong. The aliens do not want to be exterminated so they should not exterminate.
So, we’re working with thomblake’s definition of “wrong” as those actions which reduce utility for whatever function an agent happens to care about. The aliens care about themselves not being exterminated, but may actually assign very high utility to humans being wiped out.
Perhaps we would be viewed as pests, like rats or pigeons. Just as humans can assign utility to exterminating rats, the aliens could do so for us.
Exterminating humans has the objectively determinable outcome of reducing the utility in your subjectively privileged function.
Inasmuch as we are talking about objective rightness, we are not talking about utility functions, because not everyone is running off the same utility function, and it makes sense to say some UFs are objectively wrong.
Let’s break this all the way down. Can you give me your thesis?
I mean, I see there is a claim here:
The aliens do not want to be exterminated so they should not exterminate.
… of the format (X therefore Y). I can understand what the (X) part of it means: aliens with a preference not to be destroyed. Now the (Y) part is a little murky. You’re saying that the truth of X implies that they “should not exterminate”. What does the word should mean there?
You’re signalling to me right now that you have no desire to have a productive conversation. I don’t know if you’re meaning to do that, but I’m not going to keep asking questions if it seems like you have no intent to answer them.
I’m busy, I’ve answered it several times before, and you can look it up yourself, e.g.:
“Now we can return to the “special something” that makes a maxim a moral maxim. For Kant it was the maxim’s universalizability. (Note that universalizability is a fundamentally different concept than universality, which refers to the fact that some thing or concept not only should be found everywhere but actually is. However, the two concepts sometimes flow into each other: human rights are said to be universal not in the sense that they are actually conceptualized and respected in all cultures but rather in the sense that reason requires that they should be. And this is a moral “should.”) However, in the course of developing this idea, Kant actually developed several formulations of the Categorical Imperative, all of which turn on the idea of universalizability. Commentators usually list the following five versions:
“Act only according to a maxim that at the same time you could will that it should become a universal law.” In other words, a moral maxim is one that any rationally consistent human being would want to adopt and have others adopt. The above-mentioned maxim of lying when doing so is to one’s advantage fails this test, since if there were a rule that everyone should lie under such circumstances no one would believe them – which of course is utterly incoherent. Such a maxim destroys the very point of lying.
“Act as if the maxim directing your action should be converted, by your will, into a universal law of nature.” The first version showed that immoral maxims are logically incoherent. The phrase “as if” in this second formulation shows that they are also untenable on empirical grounds. Quite simply, no one would ever want to live in a world that was by its very nature populated only by people living according to immoral maxims.
“Act in a way that treats all humanity, yourself and all others, always as an end, and never simply as a means.” The point here is that to be moral a maxim must be oriented toward the preservation, protection and safeguarding of all human beings, simply because they are beings which are intrinsically valuable, that is to say ends in themselves. Of course much cooperative activity involves “using” others in the weak sense of getting help from them, but moral cooperation always includes the recognition that those who help us are also persons like ourselves and not mere tools to be used to further our own ends.
“Act in a way that your will can regard itself at the same time as making universal law through its maxim.” This version is much like the first one, but it adds the important link between morality and personal autonomy: when we act morally we are actually making the moral law that we follow.
“Act as if by means of your maxims, you were always acting as universal legislator, in a possible kingdom of ends.” Finally, the maxim must be acceptable as a norm or law in a possible kingdom of ends. This formulation brings together the ideas of legislative rationality, universalizability, and autonomy. ”
You mean, “The aliens do not want to be exterminated, so the aliens would prefer that the maxim ‘exterminate X’, when universally quantified over all X, not be universally adhered to.”?
Well… so what? I assume the aliens don’t care about universalisable rules, since they’re in the process of exterminating humanity, and I see no reason to care about such either. What makes this more ‘objective’ than, say, sorting pebbles into correct heaps?
‘Good’ and ‘bad’ are always evaluated in terms of effects upon a particular thing; a good hammer is one which optimally pounds in nails, a good horse is fast and strong, and a good human experiences eudaimonia. Murder is the sort of thing that makes one a bad human; it makes one less virtuous and thus less able to experience eudaimonia.
What is eudaimonia for...or does the buck stop there?
As a side note, ‘murder’ is normative; it is tautologically wrong.
And tautologies and other a priori procedures can deliver epistemic objectivity without the need for any appeal to quasi-empiricism.
What is eudaimonia for...or does the buck stop there?
It was originally defined as where the buck stops. To Aristotle, infinite chains of justification were obviously no good, so the ultimate good was simply that which all other goods were ultimately for.
Regardless of how well that notion stands up, there is a sense in which ‘being a good hammer’ is not for anything else, but the hammer itself is still for something and serves its purpose better when it’s good. Those things are usually unpacked nowadays from the perspective of some particular agent.
A good hammer is good for whatever hammers are for. You could hardly have a clearer example of an instrumental good. And your claim that all goods are for something is undermined by the way you are handling eudaimonia.
A good hammer is good for whatever hammers are for.
Yes, that’s what I said above:
the hammer itself is still for something and serves its purpose better when it’s good
your claim that all goods are for something is undermined by the way you are handling eudaimonia.
I don’t think it is. I did not say I agreed with Aristotle that this sort of infinite regress is bad. Eudaimonia is the good life for me. All other things that are good for me are good in that they are part of the good life. It is the case that I should do what is best for me. As a side effect, being a good human makes me good for all sorts of things that I don’t necessarily care about.
This probably reduces to a statement about my preferences / utility function, as long as those things are defined in the ‘extrapolated’ manner. That is, even if I thought it were the case that I should drink drain cleaner, and I then drank drain cleaner, it was still the case that I preferred not to drink drain cleaner and only did so because I was mistaken about a question of fact. This does not accord well with the usual English meaning of ‘preference’.
This is doable… Let d be the length of the diameter of some circle, and c be the circumference of the same circle. Then if you have an integer number (m) of sticks of length d in a straight line, and an integer number (n) of sticks of length c in a different straight line, then the two lines will be of different lengths, no matter how you choose your circle, or how you choose the two integers m and n.
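Spelling out the step this relies on, using nothing beyond the irrationality of $\pi$: if the two lines had equal length then, since $c = \pi d$,

$$ m \cdot d \;=\; n \cdot c \;=\; n \pi d \quad\Longrightarrow\quad \pi = \frac{m}{n} \in \mathbb{Q}, $$

contradicting the irrationality of $\pi$. Hence $md \neq nc$ for every circle and all positive integers $m$ and $n$.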
In general, if the axioms that prove a theorem are demonstrable in a concrete and substantive way, then any theorems proved by them should be similarly demonstrable, by deconstructing it into its component axioms. But I could be missing something.
There are sets of axioms that mathematicians use that aren’t really demonstrable in the physical universe, and there are different sets of axioms where different truths hold, ones that are not in line with the way the universe works. Non-Euclidean geometry, for example, in which Euclid’s parallel postulate does not hold. Any theorem is true only in terms of the axioms that prove it, and the only reason why we attribute certain axioms to this universe is because we can test them and the universe always works the way the axiom predicts.
For morality, you can determine right and wrong from a societal/cultural context, with a set of “axioms” for a given society. But I have no idea how you’d test the universe to see if those cultural “axioms” are “true”, like you can for mathematical ones. I don’t see any reason why the universe should have such axioms.
It is surprising to find someone on a site dedicated to the Art of Rationality who cannot imagine impersonal norms, since rationality is a set of impersonal norms. You are not compelled to be rational, to be moral, or to play chess. But if you are being rational, you should avoid contradictions—that is a norm. If you are being moral, you should avoid doing unto others what you would not wish done unto you. If you are playing chess, you should avoid placing the bishop on a square that cannot be reached diagonally from the current one. No-one makes you follow those rules, but there is a logical relationship between following the rules and playing the game: you cannot break the rules and still play the game. In that sense, you must follow the rules to stay in the game. But that is not an edict coming from a person or Person.
ETA:
You may well need agents to have values. I don’t require morality to be free of values.
To expound on JoshuaZ’s point, would chess have rules even if there were no minds?
Are the rules of chess objective and independent of anyone actually, y’know, knowing them?
ETA: Furthermore, if two agents in a place where chess has never existed come across a chess set, and they have a disagreement about what the rules might be, is one of them right and the other wrong?
There’s exactly one function which is objectively the function which returns 1 for just those moves where a pawn moves one space, or two spaces on the first move, or the bishop moves diagonally, or the king moves one space, where none of the moves intersect other pieces without capturing, etc...
As I said, not every mind will care to evaluate this function.
As for whether mathematical objects exist… is this important? It really adds up to the same thing, either way.
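For concreteness, a tiny and deliberately incomplete sketch of the kind of function being described; only a few movement rules appear, and the full legality conditions (blocking pieces, check, castling, en passant, turn order) are left out.

```python
# Sketch: a fixed function that returns 1 for moves matching a few of the
# movement rules named above and 0 otherwise. Board squares are (row, column)
# pairs; this is nowhere near a complete legality check for chess.

def legal_move(piece, start, end, is_first_move=False):
    (r1, c1), (r2, c2) = start, end
    dr, dc = r2 - r1, c2 - c1
    if piece == "pawn":
        # One square forward, or two squares on the pawn's first move.
        return 1 if dc == 0 and (dr == 1 or (dr == 2 and is_first_move)) else 0
    if piece == "bishop":
        # Any nonzero diagonal move.
        return 1 if dr != 0 and abs(dr) == abs(dc) else 0
    if piece == "king":
        # One square in any direction.
        return 1 if max(abs(dr), abs(dc)) == 1 else 0
    return 0

print(legal_move("bishop", (0, 2), (3, 5)))  # 1: a diagonal move
print(legal_move("bishop", (0, 2), (3, 4)))  # 0: not reachable diagonally
```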
There’s exactly one function which is objectively the function which returns 1 for just those moves where a pawn moves one space, or two spaces on the first move, or the bishop moves diagonally, or the king moves one space, where none of the moves intersect other pieces without capturing, etc...
Yes, of course. I wasn’t arguing anything else. The person I was contending with is a moral realist, who would say that the function which represents those rules, the rules under which we play chess now, is the correct set of rules, and that this correctness is objective and independent of the minds of chess players.
This person would, I presume, argue that if suddenly every chess player in the world at the same time agreed to eliminate en passant from the game of chess, that they would then be playing the game “wrong”.
That is the position which I find nonsensical. I’m not arguing for anything bizarre here. I’m a Bayesian rationalist and a reductionist and yes I have read the sequences.
The person I was contending with is a moral realist, who would say that the function which represents those rules, the rules under which we play chess now, is the correct set of rules, and that this correctness is objective and independent of the minds of chess players.
I explained the point I was making and that wasn’t it: The point was what obligation/compulsion means. It doesn’t mean it is physically impossible to do what is morally forbidden. It doesn’t mean it is an edict you will be punished for disobeying. It does mean that it is logically impossible to be moral (or rational or a chess player) after having significantly departed from the rules.
This person would, I presume, argue that if suddenly every chess player in the world at the same time agreed to eliminate en passant from the game of chess, that they would then be playing the game “wrong”.
They would be playing a different game: chess 2.0 or chess++. Plainly, you can’t have one player using the revised rules and her opponent the old ones.
With your chess analogy, those moves that are forbidden are set by human minds and decisions. The game of chess itself is a product of human intelligence, and they can change the rules over time, and indeed they have.
Are you saying that morality works the same way? That what is morally forbidden are those things which most people find objectionable / assign negative value to?
They would be playing a different game: chess 2.0 or chess++. Plainly, you can’t have one player using the revised rules and her opponent the old ones.
Dude, you just said a minute ago that the word “chess” could be a family of different but related rulesets when I asked about castling, but now when it comes to changing en passant the game becomes something else entirely? I think you should respond to my question on that thread about a precise explanation of what you mean by “chess”, as I cannot figure out why some things count and others do not.
If you vary the way games work too much, you end up with useless non-games (winning is undefinable, one player always wins...)
If you vary the way rationality works too much, you end up with paradox, quodlibet etc.
If you vary the rules of meta ethics too much, you end up with anyone being allowed to do anything, or nobody being allowed to do anything.
“The rules are made up” doesn’t mean the rules are arbitrary.
There is a family of chess-type games, and they are different games, because they are not intersubstitutable.
One could identify many different sets of rules for chess mathematically, but is one of them objectively the “correct” set of rules?
I find that perplexing. Perhaps you mean many sets of rules could be used to play games with chess boards and pieces. But they are not all chess. Chess is its rules.
Same rules + different pieces = same game. Different rules + same pieces = different game.
Chess is its rules. Same rules + different pieces = same game. Different rules + same pieces = different game.
This isn’t strictly speaking true. Note that there have been many different games called chess. For example, pawns being able to move 2 squares on their first move, en passant, castling, and the queen being able to move as she can, are all recent innovations. But let’s put that aside and explore your analogy. If there’s one thing called “morality” then I fail to see how that isn’t just one game among many. You seem to be treating morality like chess (in that there’s an objective thing that is or is not chess) but are then bringing along for the particular game called “morality” all sorts of assumptions about whether or not people should play it or expect to play it. This seems akin to asserting that because there’s only one objective game called “chess” that entities “should” play it.
In Italy one can still find older chess players who use an alternative castling rule, from when castling was first being introduced, called “free castling” in which the rook can take any of the squares between itself and the king, or the king’s position, rather than the single permitted position (depending on the side) of the more common castling rules we play with today.
Is one of these versions the “correct” way to play chess? Or does it depend entirely on the subjective viewpoint of the chess players?
Which way is the correct way to play “chess” depends on which definition of the word chess you are using. In general, we resolve ambiguities like that by looking at the speaker’s intent. (The speaker does not have to be one of the players.)
Which way is the correct way to play “chess” depends on which definition of the word chess you are using. In general, we resolve ambiguities like that by looking at the speaker’s intent. (The speaker does not have to be one of the players.)
Yes, I know that. I’m asking rhetorical questions to Peter who is a moral realist.
Alright, this is your analogy, and instead of dancing around and arguing definitions can you explain, in precise terms, what you mean when you say chess?
I’ll take the bait: A place where the idea of chess had never been thought of couldn’t, by definition, contain a chess set. It could contain an artifact physically identical to one and which we could call one if we weren’t being precise, but it would have to have come from some origin besides humans intending to build a chess set. Thus it would be perfectly reasonable for the two to speculate (and be right or wrong) about what (if any) use it was meant for by its actual builders (if any).
It could contain an artifact physically identical to one and which we could call one if we weren’t being precise, but it would have to have come from some origin besides humans intending to build a chess set.
Do you seriously mean to imply that something identical to chess set is not a chess set? The words “chess set” as I used them above are meant only to connect to a physical object, not intentions.
Thus it would be perfectly reasonable for the two to speculate (and be right or wrong) about what (if any) use it was meant for by its actual builders (if any).
In practical terms I agree completely. My argument with Peter wasn’t actually about chess though, so it doesn’t make a ton of sense when you focus on particulars of the analogy, especially an analogy so flawed as this one.
Do you think we disagree on any issue of substance? If so, where?
I’m familiar with it anyway. The point is that things like history, provenance, and cultural significance are built into the way we think about things, part of the connotational cloud. That doesn’t contradict QM, but it does contradict schemes to losslessly reduce meaning to physics.
I’m sorry Peter, but I do not subscribe to your flights of non-empirical philosophy. We have been around this over and over. I could sit here and explain to you how the chess analogy fails, how the rationality analogy fails, et cetera and so on.
You embody Hollywood rationality. Your conception of belief and thought is entirely verbal and anthropocentric. You focus too hard on philosophy and not enough on physics. As a result, you can be seen almost palpably grasping at straws.
I avoid doing unto others what I would not wish done unto me because that policy, when shared by social animals like humans, leads to results I prefer. I have evolved that preference. I voluntarily cooperate because it is in my direct interest, i.e. fulfills my values and preferences.
I could sit here and explain to you how the chess analogy fails, how the rationality analogy fails, et cetera and so on.
I would find it easier to believe you could if you had.
I avoid doing unto others what I would not wish done unto me because that policy, when shared by social animals like humans, leads to results I prefer. I have evolved that preference. I voluntarily cooperate because it is in my direct interest, i.e. fulfills my values and preferences.
I don’t deny that you can reason about preferences. All I’m saying is that to make a decision about whether to keep, discard, or modify your own preferences, the only metrics you have to check against are your own existing values and preferences. There are no moral facts out there in the universe to check against.
Do you disagree?
I would find it easier to believe you could if you had.
It turns out that I couldn’t walk away so easily, and so I, and several others, have.
Next I’ll be saying that mathematicians can come up with objectively true theorems without checking them against Paul Erdos’s Book..
Values and preferences can be re-evaluated according to norms of rationality, such as consistency. We generally deem the outputs of reasoning processes to be objective, even absent the existence of a domain of things to be checked against.
Next I’ll be saying that mathematicians can come up with objectively true theorems without checking them against Paul Erdos’s Book..
First, that book only has the elegant proofs. Second, this totally misses the point: that whether a statement is a theorem of a given formal system is objectively true or false is a distinct claim from the claim that some set of axioms is objectively worth paying attention to. Even if two mathematicians disagree about whether or not one should include the Axiom of Choice in set theory, they’ll both agree that doing so is equivalent to including Zorn’s Lemma.
You aren’t just claiming that there are “theorems” from some set of moral axioms, but seem to be claiming that some sets of axioms are intrinsically better than others. You keep making this sort of leap and keep giving it no substantial justification other than the apparent reasoning that you want to be able to say things like “Gandhi was good” or “genocide is bad” and feel that there’s objective weight behind it. And we all empathize with that desire, but that doesn’t make those statements more valid in any objective sense.
I haven’t said anything is intrinsically better: I have argued that the choice of basic principles in maths, morality, etc. is constrained by what we expect to be able to do with those things.
If you vary the way games work too much, you end up with useless non-games (winning is undefinable, one player always wins...)
If you vary the way rationality works too much, you end up with paradox, quodlibet etc.
If you vary the rules of meta ethics too much, you end up with anyone being allowed to do anything, or nobody being allowed to do anything.
Yes. I do want to be able to say murder is wrong. I should want to be able to say that. It’s a feature, not a bug. What use is a new improved rationalised system of mathematics which can’t support 2+2=4?
Peter, how do you reconcile this statement with your statements, such as the one here, where you say:
I think most moral nihilists are not evil. But the point is that if he really does think murder is not wrong, he has a bad glitch in his thinking; and if he does think murder is wrong, but feels unable to say so, he has another glitch.
How can you say someone has a glitch if they simply aren’t adopting your system which you acknowledge is arbitrary?
By arbitrarily declaring what qualifies as a glitch. (Which is only partially arbitrary if you have information about typical or ‘intended’ behaviour for an agent.)
I said he has a glitch if he can’t see that murder is wrong. I didn’t say he had to arrive at it the way I arrived at it. I am selling a metaethical theory. I am not selling a 1st-order system of morality like Roman Catholicism or something. I use core intuitions, common to all 1st-order systems, as test cases. If you can’t get them out of your metaethical principles, you are doing something wrong.
What use is a new improved rationalised system of mathematics which can’t support 2+2=4?
So morality is like chess, but there’s some sort of grounding for why we should play it? I am confused as to what your position is.
What use is a new improved rationalised system of mathematics which can’t support 2+2=4?
I’m not sure what you mean by that. If I’m following your analogy correctly then this is somewhat wrong. Any reasonable general philosophy of metamathematics would tell you that 2+2=4 is only true in certain axiomatic systems. For example, if I used as an axiomatic system all the axioms of ZFC but left out the axiom of infinity and the axiom of replacement, I cannot then show that + is a well-defined operation. But this is an interesting system which has been studied. Moreover, nothing in my metamathematics tells me that I should be more interested in ZFC or Peano Arithmetic. I am more interested in those systems, but that’s due to cultural and environmental norms. And one could probably have a whole career studying weak systems where one cannot derive 2+2=4 for the most natural interpretations of “2”, “+”, “=” and “4” in that system.
To return to the original notion, just because a metaethical theory has to support that someone within their moral and ethical framework holds “murder is wrong” doesn’t mean that the metaethical system must consider that to be a non-arbitrary claim. This is similar to how just because our metamathematical theory can handle 2+2=4, that doesn’t mean it needs to assert that 2+2=4 in some abstract sense.
For example, if I used as an axiomatic system all the axioms of ZFC but left out the axiom of infinity and the axiom of replacement, I cannot then show that + is a well-defined operation.
I know this is a sidetrack, but I don’t think that’s right, unless we’re omitting the axiom of pairing as well. Can’t we use pairing to prove the finite version of replacement? (This needs an induction, but that doesn’t require the axioms of replacement or infinity.) Hence, can’t we show that addition of finite ordinals is well-defined, at least in the sense that we have a class Plus(x,y,z) satisfying the necessary properties?
(Actually, I think it ought to be possible to show that addition is well-defined even without pairing, because power set and separation alone (i.e. together with empty set and union) give us all hereditarily finite sets. Perhaps we can use them to prove that {x,y} exists when x and y are hereditarily finite.)
I know this is a sidetrack, but I don’t think that’s right, unless we’re omitting the axiom of pairing as well. Can’t we use pairing to prove the finite version of replacement? (This needs an induction, but that doesn’t require the axioms of replacement or infinity.)
If we don’t have the axiom of infinity then addition isn’t a function (since its domain and range aren’t necessarily sets).
Sure, in the sense that it’s not a set. But instead we can make do with a (possibly proper) “class”. We define a formula Plus(x,y,z) in the language of set theory (i.e. using nothing other than set equality and membership + logical operations), then we prove that for all finite ordinals x and y there exists a unique finite ordinal z such that Plus(x,y,z), and then we agree to use the notation x + y = z instead of Plus(x,y,z).
This is not an unusual situation in set theory. For instance, cardinal exponentiation and ‘functions’ like Aleph are really classes (i.e. formulas) rather than sets.
Yes. But in ZFC we can’t talk about classes. We can construct predicates that describe classes, but one needs to prove that those predicates make sense. Can we in this context show that Plus(x,y,z) is a well-defined predicate that acts like we expect addition to act (i.e. associative, commutative and has 0 as an identity)?
In practice we tend to throw them around even when working in ZFC, on the understanding that they’re just “syntactic sugar”. For instance, if f(x,y) is a formula such that for all x there exists unique y such that f(x,y), and phi is some formula then rather than write “there exists y such that f(x,y) and phi(y)” it’s much nicer to just write “phi(F(x))” even though strictly speaking there’s no such object as F.
Can we in this context show that Plus(x,y,z) is a well-defined predicate that acts like we expect addition to act (i.e. associative, commutative and has 0 as an identity)?
I think the proofs go through almost unchanged (once we prove ‘finite replacement’).
Well, we could define Plus(x,y,z) by “there exists a function f : x → z with successor(max(codomain(f))) = z, which preserves successorship and sends 0 to y”. (ETA: This only works if x > 0 though.)
And then we just need to do loads of inductions, but the basic induction schema is easy:
Suppose P(0) and for all finite ordinals n, P(n) implies P(n+1). Suppose ¬P(k). Let S = {finite ordinals n : ¬P(n) and n ≤ k}. By the axiom of foundation, S has a smallest element m. Then ¬P(m). But then either m = 0 or P(m-1), yielding a contradiction in either case.
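Restated in symbols (nothing new, just the schema the argument above establishes), with $n$ and $k$ ranging over finite ordinals:

$$ \Big( P(0) \;\wedge\; \forall n \, \big( P(n) \rightarrow P(n^{+}) \big) \Big) \;\rightarrow\; \forall n \, P(n), $$

where the minimal element of $S = \{\, n \le k : \neg P(n) \,\}$ used in the proof is supplied by foundation.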
Sure, but we still have a “class”. “Classes” are either crude syntactic sugar for “formulae” (as in ZFC) or they’re a slightly more refined syntactic sugar for “formulae” (as in BGC). In either case, classes are ubiquitous—for instance, ordinal addition isn’t a function either, but we prove things about it just as if it was.
But if you are being rational, you should avoid contradictions—that is a norm. If you are being moral, you should avoid doing unto others what you would not wish done unto you. If you are playing chess, you should avoid placing the bishop on a square that cannot be reached diagonally from the current one. No-one makes you follow those rules, but there is a logical relationship between following the rules and playing the game: you cannot break the rules and still play the game. In that sense, you must follow the rules to stay in the game.
Are you asserting that being “moral” is just like a game with a set of agreed-upon rules? That doesn’t fit with your earlier claims (e.g. this remark) and on top of that seems to run into the problem of people not agreeing on what the rules are. Note, incidentally, that it is extremely unlikely that any random intelligence will either know or have any desire to play chess. If you think the same applies to your notion of morality then there’s much less disagreement, but that doesn’t seem to be what you are asserting. I am confused.
Agreed upon? Most of the game is in making up rules and forcing them on others despite their disagreement!
Although that happens with other games also, when there’s a disagreement about the rules. It just seems to be a smaller fraction of the game and something that everyone tries to avoid. There are some games that explicitly lampshade this. The official rules of Munchkin say something like (paraphrase) “in a rules dispute, whoever shouts loudest is right.”
Although it would be unpleasant, I think, to be the loud guy at the party who no one wants to be there. Still, I overreacted, I think. It was a relatively small number of my posts that were received negatively, and absent an explanation of why they were, all I can do is work on refining my rationality and communication skills.
Edit: Downvoted lol
Further edit: This could be like rejection therapy for karma. Everyone downvote this post!
I’m arguing that there is a sense in which one “should” follow rules which has nothing to do with human-like agents laying down the law, thereby refuting NMJ’s attempt at a link between objective morality and theism.
There are constraints on the rules games can have (eg fairness, a clear winner after finite time).
There are constraints on rationality (eg avoidance of quodlibet).
Likewise, there are constraints on the rules of moral reasoning (e.g. people cannot just make up their own morality and do what they want). Note that I am talking about metaethics here.
I’m arguing that there is a sense in which one “should” follow rules which has nothing to do with human-like agents laying down the law, thereby refuting NMJ’s attempt at a link between objective morality and theism.
So this is the exact opposite of a chess game. So what do you mean by your analogy?
Wow, that’s right up there with attempts to define God that end up defining God to be “the universe”, “the laws of physics”, or “mathematics”.
Yes, technically you can define God to be any of these things and any of the above definitions would make me a theist. However, I don’t think any of the above definitions are particularly helpful.
All I’m saying is that there needs to be an intelligence, some value-having agent or entity, in order for actions to be judged. If there is no intelligence, there are no values.
All I’m saying is that there needs to be an intelligence, some value-having agent or entity, in order for actions to be judged. If there is no intelligence, there are no values.
Judging requires an agent. But values do not. That just requires an object capable of representing information. The universe could have values built into it even without an intelligence to do any judging with them. (Completely irrelevant observation.)
Judging requires an agent. But values do not. That just requires an object capable of representing information.
I see what you mean there, but without intelligence those values would be just static information. I don’t see how the moral realist’s conception of objective morality can make any sense without an intelligent agent.
I suppose, in connection with your point about “subjective objectivity” a moment ago, I can see how any set of values can be said to “exist” in the sense that one could measure reality against them.
Edit: That doesn’t seem to change anything ethically though. We can call it objective if we like, but to choose which of those we call “right” or “moral” depends entirely on the values and preferences of the querying agent.
I never said anything about religion. Here I’m merely pointing out wedrifid’s hypocrisy.
Sorry to use such a strong word, but I honestly can’t see the difference between what wedrifid called Peter logically rude for doing, and what wedrifid is doing in the ancestor.
Wedrifid never said dashing babies against rocks was “wrong in the cosmic eyes of the universe”. I suspect s/he values a culture that doesn’t do that. That would be his or her personal value / preference.
We don’t need to subscribe to your theory of absolute morals to prefer, support, or value things.
Wedrifid never said dashing babies against rocks was “wrong in the cosmic eyes of the universe”. I suspect s/he values a culture that doesn’t do that. That would be his or her personal value / preference.
That seems like a likely assumption. For a start, it is terribly unhygienic. All brains, blood and gore left around the place. Lots of crows picking at little bits of baby. Flies and maggots. Ants. A recipe for spreading plague.
I don’t think that argument means what you think it does.
I don’t see how to interpret this exchange any other way.
The position you describe is sensical, but it’s not what people (at least on LW) who think “moral realism” is nonsensical mean by “morality”. You’re not saying anything about ultimate ends (which I’m pretty sure is what NMJablonski, e.g., means by “preferences”); the version of “moral realism” that gets rejected is about certain terminal values being spookily metaphysically privileged.
Actually, I am saying something about ultimate ends, at least indirectly. My position only makes sense if long-term ultimate ends become somehow ‘spookily metaphysically privileged’ over short-term ultimate ends.
My position still contains ‘spookiness’ but it is perhaps a less arbitrary kind of ‘magic’ - I’m talking about time-scales rather than laws inscribed on tablets.
How is this different from “if the agents under consideration care about the long run more than the short run”?
Well, I am attempting to claim here that there exists an objective moral code (moral realism) which applies to all agents—both those who care about the long term and those who don’t. Agents who mostly care about the short term will probably be more ethically challenged than agents who easily and naturally defer their gratification. But, in this thread at least, I’m arguing that both short-sighted and long-sighted agents confront the same objective moral code. So, I apparently need to appeal to some irreducible spookiness to justify that long-term bias.
Interesting.
One possible attack: take the moral code and add “Notwithstanding all this, kill the humans.” to the end. This should be superior for all the remaining agents, since humans won’t be using up any resources after that’s accomplished (assuming we can’t put up enough of a fight).
A practical vulnerability to (perhaps unconsciously biased) self-interested gamers: untestable claims that although the moral code is at a local optimum, everyone needs to switch to a far away alternative, and then, after the new equilibrium, things will really be better. Sci-fi rejoinder: this explains why there are so many simulations :)
Yes, in a society with both human and non-human agents, if the humans contribute nothing at all to the non-humans, and only consume resources that might have been otherwise used, then the non-humans will judge the humans to be worthless. Worse than worthless. A drain to be eliminated.
But there is nothing special about my version of ethics in this regard. It is a problem that must be faced in any system of ethics. It is the FAI problem. Eliezer’s solution is apparently to tell the non-human agents as they are created “Humans are to be valued. And don’t you forget it when you self-modify.” I think that a better approach is to make sure that the humans actually contribute something tangible to the well-being of the non-humans.
Perhaps neither approach is totally safe in the long run.
Interestingly, it seems to me that this definition would produce meta-maxims more readily than normal ones—“find out what your local society’s norms are and stick to those” rather than “don’t eat babies”, for example.
Eliezer’s Babyeaters and Superhappies both seemed to follow that meta-rule. (It was less obvious for the latter, but the fact that they had complex societies with different people holding different set roles strongly implies it.)
Yes, it does produce meta-maxims in some sense. But meta-maxims can provide useful guidance. “Do unto others …” is a meta-maxim. So is “Don’t steal (anything from anybody)” or “Don’t lie (about anything to anyone)”.
As to whether this definition leads to the meta-maxim “find out what your local society’s norms are and stick to those”, that is explicitly not a consequence of this definition. Instead, the meta-maxim would be “work out what your local society’s norms should be and set an example by doing that”.
Thanks for this. Your first paragraph looks very clean to me—not tendentious and even almost falsifiable. But can you tell me how I should understand “best” and “morally permissible” in the second paragraph? (Morally permissible according to the best maxims only, or morally permissible according to all maxims indicated in the first paragraph?)
‘Best’ here means most advantageous to the agents involved—both the focal agent, and the other agents who join in the convention. What I have in mind here is something like the Nash Bargaining Solution, which is both unique and (in some sense) optimal.
“Morally permissible” by the best code, which is also the code mentioned in the first paragraph. What I am claiming is that the ‘moral should’ can be reduced to the ‘practical self-interested should’ if (and it is a big if) it is reasonable to treat the other agents as rational and well-informed.
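For reference, the Nash Bargaining Solution mentioned above has a compact textbook statement (nothing in it is specific to this thread):

\[
(u_1^*, u_2^*) \;=\; \arg\max_{\substack{(u_1, u_2) \in F \\ u_1 \ge d_1,\ u_2 \ge d_2}} \;(u_1 - d_1)(u_2 - d_2)
\]

where F is the set of payoff pairs the agents can jointly reach and (d_1, d_2) is the disagreement point, i.e. what each agent gets if bargaining fails. Under Nash’s axioms the maximizer is unique, which is the sense of “unique and (in some sense) optimal” above.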
The “intelligent” qualification could mean just about anything, and sort of functions as an applause light on LW, and the “social” part sounds kind of collectivist. Besides that, this is just called having a low time preference, and is a common self-help concept. I don’t see any need to label this morality.
I didn’t downvote, but you didn’t address the crypto-Kantian angle at all. So it seems like you stopped reading too soon.
You’re right, I did stop reading carefully toward the end because the Kantian stuff looked like it was just a side note. I’ve now read it carefully, but I still don’t see the significance.
I can see how with that definition of morality it could be sensibly theorized as objective. I don’t think that sentence is true, as there are many people (e.g. suicide bombers) whose evaluations of their long-term interest are significant outliers from other agents.
That’s right, but this exception (people whose interests are served by violating the moral norm) itself has a large exception, which is that throughout most of the suicide bomber’s life, he (rightly) respects the moral norm. Bad people can’t be bad every second of their lives—they have to behave themselves the vast majority of the time if for no other reason than to survive until the next opportunity to be bad. The suicide bomber has no interest in surviving once he presses the button, but for every second of his life prior to that, he has an interest in surviving.
And the would-be eventual suicide bomber also, through most of his life, has no choice but to enforce moral behavior in others if he wants to make it to his self-chosen appointment with death.
If we try to imagine someone who never respects recognizable norms—well, it’s hard to imagine, but for one thing they would probably make most of the “criminally insane” look perfectly normal and safe to be around by contrast.
Upvoted. The events you describe make sense and your reasoning seems valid. Do you think, based upon any of our discussion, that we disagree on the substance of the issue in any way?
If so, what part of my map differs from yours?
I’m withholding judgment for now because I’m not sure if or where we differ on any specifics.
That one agent’s preferences differ greatly from the norm does not automatically make cooperation impossible. In a non-zero-sum game of perfect information, there is always a gain to be made by cooperating. Furthermore, it is usually possible to restructure the game so that it is no longer zero-sum.
For example, a society confronting a would-be suicide bomber will (morally and practically) incarcerate him, if it has the information and the power to do so. And, once thwarted from his primary goal, the would-be bomber may find that he now has some common interests with his captors. The game is no longer zero-sum.
So I don’t think that divergent interests are a fatal objection to my scheme. What may be fatal is that real-world games are not typically games with perfect information. Sometimes, in the real world, it is advantageous to lie about your capabilities, values, and intentions. At least advantageous in the short term. Maybe not in the long term.
Can’t I construct trivial examples where this is false? E.g. the one-by-two payoff matrices (0,100) and (1,-1).
That is a zero-sum game. (Positive affine transformations of the payoff matrices don’t change the game.)
It is also a game with only one player. Not really a game at all.
ETA: If you want to allow ‘games’ where only one ‘agent’ can act, then you can probably construct a non-zero-sum example by offering the active player three choices (A, B, and C). If the active player prefers A to B and B to C, and the passive player prefers B to C and C to A, then the game is non-zero-sum since they both prefer B to C.
I suppose there are cases like this in which what I would call the ‘cooperative’ solution can be reached without any cooperation—it is simply the dominant strategy for each active player. (A in the example above). But excluding that possibility, I don’t believe there are counterexamples.
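To make the A/B/C example concrete, here is a minimal sketch; the numeric payoffs are arbitrary choices that merely respect the stated preference orders, not anything from the original comment:

```python
# Ordinal payoffs for the A/B/C example: the active player prefers A > B > C,
# the passive player prefers B > C > A. The numbers are illustrative only.
payoffs = {
    # outcome: (active player's utility, passive player's utility)
    "A": (3, 1),
    "B": (2, 3),
    "C": (1, 2),
}

def strictly_opposed(payoffs):
    """True if the two players rank every pair of outcomes oppositely,
    i.e. their interests are strictly competitive."""
    outcomes = list(payoffs)
    for i, x in enumerate(outcomes):
        for y in outcomes[i + 1:]:
            (a1, p1), (a2, p2) = payoffs[x], payoffs[y]
            if (a1 - a2) * (p1 - p2) > 0:  # both strictly prefer the same one
                return False
    return True

print(strictly_opposed(payoffs))  # False: both players prefer B to C
```

Since both players prefer B to C, their interests are not strictly opposed, which is the point of the example.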
Rather than telling me how my counterexample violates the spirit of what you meant, can you say what you mean more precisely? What you’re saying in 1. and 2. are literally false, even if I kind of (only kind of) see what you’re getting at.
When I make it precise, it is a tautology. Define a “strictly competitive game” as one in which all ‘pure outcomes’ (i.e. results of pure strategies by all players) are Pareto optimal. Then, in any game which is not ‘strictly competitive’, cooperation can result in an outcome that is Pareto optimal—i.e. better for both players than any outcome that can be achieved without cooperation.
The “counter-example” you supplied is ‘strictly competitive’. Some game theory authors take ‘strictly competitive’ to be synonymous with ‘zero sum’. Some, I now learn, do not.
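For reference, the standard definition being invoked: an outcome o is Pareto optimal when no alternative o′ makes every player at least as well off and some player strictly better off,

\[
\neg \exists o' \; \bigl[\, \forall i\ u_i(o') \ge u_i(o) \ \wedge\ \exists j\ u_j(o') > u_j(o) \,\bigr],
\]

so a game that is not “strictly competitive” in the sense above has at least one pure outcome that leaves a joint improvement on the table.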
I wasn’t arguing that cooperation is impossible. From everything you said there it looks like your understanding of morality is similar to mine:
Agents each judging possible outcomes based upon subjective values and taking actions to try to maximize those values, where the ideal strategy can vary between cooperation, competition, etc.
This makes sense I think when you say:
The members of that society do that because they prefer the outcome in which he does not suicide attack them, to one where he does.
This phrasing seems exactly right to me. The would-be bomber may elect to cooperate, but only if he feels that his long-term values are best fulfilled in that manner. It is also possible that the bomber will resent his captivity, and if released will try again to attack.
If his utility function assigns (carry out martyrdom operation against the great enemy) an astronomically higher value than his own survival or material comfort, it may be impossible for society to contrive circumstances in which he would agree to long term cooperation.
This sort of morality, where agents negotiate their actions based upon their self-interest and the impact of others’ actions, until they reach an equilibrium, makes perfect sense to me.
This seems as well-motivated as the position that nothing can exist without someone to create it. That is, it seems intuitively true to a human since we privilege agency, but I don’t see any contradiction, logical or otherwise, in having moral facts be real.
This question might reduce to the question of whether mathematical facts are “real”, which might not make any more sense. Is there a sense in which there are “two” rocks here, even if there were no agent to count the rocks? Is there a sense in which murder is wrong, even if there was never anyone to murder or observe murder?
I think the only difficulties here are definitional (suggested by the word “sense” above) and the proper thing to do with a definitional dispute is to dissolve it. Most moral realists hereabouts are some sort of relativists (that is, we take it to be a “miracle” that we care about what’s right rather than something else, and otherwise would have taken it to be a “miracle” that we care about something else instead of what’s right, but that doesn’t change what’s right).
I can understand what physical conditions you are describing when you say “two rocks”. What does it mean, in a concrete and substantive sense, for murder to be “wrong”?
I can give you two answers to this, one which maps better to this community and one which fits better with the virtue ethics tradition.
There exists (in the sense that mathematical functions exist) a utility function labeled ‘morality’ in which actions labeled ‘murder’ bring the universe into a state of lower-utility. I make no particular claims about the proper way to choose such a utility function, just that there is one that is properly called ‘morality’, and moral disputes can be characterized as either disputes over which function to call ‘morality’ or disputes over what the output of that function would be given certain inputs.
‘Good’ and ‘bad’ are always evaluated in terms of effects upon a particular thing; a good hammer is one which optimally pounds in nails, a good horse is fast and strong, and a good human experiences eudaimonia. Murder is the sort of thing that makes one a bad human; it makes one less virtuous and thus less able to experience eudaimonia.
It could be the case that the terms ‘good’, ‘bad’, and ‘eudaimonia’ should be evaluated based on the preferences of an agent. But in that case, that does not make it any less the case that moral facts are facts about the world that one could be wrong about. For instance, if I prefer to live, I should not drink drain cleaner. If I thought it was good to drink drain cleaner, I would be wrong according to my own preferences, and an outside agent with different preferences could tell me I was objectively wrong about what’s right for me to do.
As a side note, ‘murder’ is normative; it is tautologically wrong. Denying wrongness in general denies the existence of murder. It might be better to ask, “What does it mean for a particular sort of killing to be ‘wrong’?”, or else “What does it mean for a killing to be murder?”
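A minimal sketch of the first answer above, purely for illustration; the states, the numbers, and the function itself are invented, and nothing here is claimed to be the “true” morality function:

```python
# A toy utility function someone might label 'morality', in the spirit of
# the first answer above: actions labeled 'murder' move the world into a
# state this particular function scores lower. All numbers are invented.

def morality_utility(state):
    return -1_000_000 * state["murders"] + state["flourishing"]

before = {"murders": 0, "flourishing": 100}
after = {"murders": 1, "flourishing": 100}  # the same world, plus one murder

assert morality_utility(after) < morality_utility(before)

# On this framing, a moral dispute is either a dispute over which function
# deserves the label 'morality', or a dispute over what that function
# outputs for given inputs.
```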
Okay, we don’t disagree at all.
There is an objective sense in which actions have consequences. I am always surprised when people seem to think I’m denying this. Science works, there is a concrete and objective reality, and we can with varying degrees of accuracy predict outcomes with empirical study. Zero disagreement from me on that point.
So, we judge consequences of actions with our preferences. One can be empirically incorrect about what consequences an action can have, and if you choose to define “wrong” as those actions which reduce the utility of whatever function you happen to care about, then sure, we can determine that objectively too. All I am saying is that there is no objective method for selecting the function to use, and it seems like we’re in agreement on that.
Namely, we privilege utility functions which value human life only because of facts about our brains, as shaped by our genetics, evolution, and experiences. If an alien came along and saw humans as a pest to be eradicated, we could say:
“Exterminating us is wrong!”
… and the alien could say:
“LOL. No, silly humans. Exterminating you is right!”
And there is no sense in which either party has an objective “rightness” that the other lacks. They are each referring to the utility functions they care about.
Note that the definitional dispute rears its head in the case where the humans say, “Exterminating us is morally wrong!” in which case strong moral relativists insist the aliens should respond, “No, exterminating you is morally right!”, while moral realists insist the aliens should respond “We don’t care that it’s morally wrong—it’s shmorally right!”
There is also a breed of moral realist who insists that the aliens would have somehow also evolved to care about morality, as the Kantians who believe morality follows necessarily from basic reason. I think the burden of proof still falls on them for that, but unfortunately there aren’t many smart aliens to test.
The aliens could say it’s morally right, since no amount of realism/objectivism stops one from being able to make false statements.
That doesn’t seem relevant. I was noting cases of what the aliens should say based on what they apparently wanted to communicate. I was thus assuming they were speaking truthfully in each case.
In other words, in a world where strong moral relativism was true, it would be true that the aliens were doing something morally right by exterminating humans according to “their morality”. In a world where moral realism is true, it would be false that the aliens were doing something morally right by exterminating humans, though it might still be the case that they’re doing something ‘shmorally’ right, where morality is something we care about and ‘shmorality’ is something they care about.
There is a sense in which one party is objectively wrong. The aliens do not want to be exterminated so they should not exterminate.
So, we’re working with thomblake’s definition of “wrong” as those actions which reduce utility for whatever function an agent happens to care about. The aliens care about themselves not being exterminated, but may actually assign very high utility to humans being wiped out.
Perhaps we would be viewed as pests, like rats or pigeons. Just as humans can assign utility to exterminating rats, the aliens could do so for us.
Exterminating humans has the objectively determinable outcome of reducing the utility in your subjectively privileged function.
Inasmuch as we are talking about objective rightness, we are not talking about utility functions, because not everyone is running off the same utility function, and it makes sense to say some UFs are objectively wrong.
What would it mean for a utility function to be objectively wrong? How would one determine that a utility function has the property of “wrongness”?
Please, do not answer “by reasoning about it” unless you are willing to provide that reasoning.
I did provide the reasoning in the alien example.
Let’s break this all the way down. Can you give me your thesis?
I mean, I see there is a claim here:
… of the format (X therefore Y). I can understand what the (X) part of it means: aliens with a preference not to be destroyed. Now the (Y) part is a little murky. You’re saying that the truth of X implies that they “should not exterminate”. What does the word should mean there?
It means universalisable rules.
You’re signalling to me right now that you have no desire to have a productive conversation. I don’t know if you’re meaning to do that, but I’m not going to keep asking questions if it seems like you have no intent to answer them.
I’m busy, I’ve answered it several times before, and you can look it up yourself, e.g.:
“Now we can return to the “special something” that makes a maxim a moral maxim. For Kant it was the maxim’s universalizability. (Note that universalizability is a fundamentally different concept than universality, which refers to the fact that some thing or concept not only should be found everywhere but actually is. However, the two concepts sometimes flow into each other: human rights are said to be universal not in the sense that they are actually conceptualized and respected in all cultures but rather in the sense that reason requires that they should be. And this is a moral “should.”) However, in the course of developing this idea, Kant actually developed several formulations of the Categorical Imperative, all of which turn on the idea of universalizability. Commentators usually list the following five versions:
“Act only according to a maxim that at the same time you could will that it should become a universal law.” In other words, a moral maxim is one that any rationally consistent human being would want to adopt and have others adopt. The above-mentioned maxim of lying when doing so is to one’s advantage fails this test, since if there were a rule that everyone should lie under such circumstances no one would believe them – which of course is utterly incoherent. Such a maxim destroys the very point of lying.
“Act as if the maxim directing your action should be converted, by your will, into a universal law of nature.” The first version showed that immoral maxims are logically incoherent. The phrase “as if” in this second formulation shows that they are also untenable on empirical grounds. Quite simply, no one would ever want to live in a world that was by its very nature populated only by people living according to immoral maxims.
“Act in a way that treats all humanity, yourself and all others, always as an end, and never simply as a means.” The point here is that to be moral a maxim must be oriented toward the preservation, protection and safeguarding of all human beings, simply because they are beings which are intrinsically valuable, that is to say ends in themselves. Of course much cooperative activity involves “using” others in the weak sense of getting help from them, but moral cooperation always includes the recognition that those who help us are also persons like ourselves and not mere tools to be used to further our own ends.
“Act in a way that your will can regard itself at the same time as making universal law through its maxim.” This version is much like the first one, but it adds the important link between morality and personal autonomy: when we act morally we are actually making the moral law that we follow.
“Act as if by means of your maxims, you were always acting as universal legislator, in a possible kingdom of ends.” Finally, the maxim must be acceptable as a norm or law in a possible kingdom of ends. This formulation brings together the ideas of legislative rationality, universalizability, and autonomy. ”
You mean, “The aliens do not want to be exterminated, so the aliens would prefer that the maxim ‘exterminate X’, when universally quantified over all X, not be universally adhered to.”?
Well… so what? I assume the aliens don’t care about universalisable rules, since they’re in the process of exterminating humanity, and I see no reason to care about such either. What makes this more ‘objective’ than, say, sorting pebbles into correct heaps?
What is eudaimonia for...or does the buck stop there?
And tautologies and other a priori procedures can deliver epistemic objectivity without the need for any appeal to quasi-empiricism.
It was originally defined as where the buck stops. To Aristotle, infinite chains of justification were obviously no good, so the ultimate good was simply that which all other goods were ultimately for.
Regardless of how well that notion stands up, there is a sense in which ‘being a good hammer’ is not for anything else, but the hammer itself is still for something and serves its purpose better when it’s good. Those things are usually unpacked nowadays from the perspective of some particular agent.
A good hammer is good for whatever hammers are for. You could hardly have a clearer example of an instrumental good. And your claim that all goods are for something is undermined by the way you are handling eudaimonia.
Yes, that’s what I said above:
I don’t think it is. I did not say I agreed with Aristotle that this sort of infinite regress is bad. Eudaimonia is the good life for me. All other things that are good for me are good in that they are part of the good life. It is the case that I should do what is best for me. As a side effect, being a good human makes me good for all sorts of things that I don’t necessarily care about.
This probably reduces to a statement about my preferences / utility function, as long as those things are defined in the ‘extrapolated’ manner. That is, even if I thought it were the case that I should drink drain cleaner, and I then drank drain cleaner, it was still the case that I preferred not to drink drain cleaner and only did so because I was mistaken about a question of fact. This does not accord well with the usual English meaning of ‘preference’.
Full disclosure: Am a moral egoist.
And are you good for something, or good for nothing :-) ?
That is hardly uncontentious...
...but you probably know that.
I answered that:
What does it mean in a concrete and substantive sense for pi to be an irrational number?
This is doable… Let d be the length of the diameter of some circle, and c be the circumference of the same circle. Then if you have an integer number (m) of sticks of length d in a straight line, and an integer number (n) of sticks of length c in a different straight line, then the two lines will be of different lengths, no matter how you choose your circle, or how you choose the two integers m and n.
In general, if the axioms that prove a theorem are demonstrable in a concrete and substantive way, then any theorems proved by them should be similarly demonstrable, by deconstructing it into its component axioms. But I could be missing something.
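In symbols, the stick construction above is just the irrationality of pi restated:

\[
\frac{c}{d} = \pi \notin \mathbb{Q} \quad\Longrightarrow\quad m\,d \neq n\,c \ \text{ for all positive integers } m, n,
\]

since m·d = n·c would give π = m/n.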
There are sets of axioms that mathematicians use that aren’t really demonstrable in the physical universe, and there are different sets of axioms where different truths hold, ones that are not in line with the way the universe works. Non-Euclidean geometry, for example, in which the parallel postulate fails: on a sphere, any two “straight lines” (great circles) eventually cross. Any theorem is true only in terms of the axioms that prove it, and the only reason why we attribute certain axioms to this universe is because we can test them and the universe always works the way the axiom predicts.
For morality, you can determine right and wrong from a societal/cultural context, with a set of “axioms” for a given society. But I have no idea how you’d test the universe to see if those cultural “axioms” are “true”, like you can for mathematical ones. I don’t see any reason why the universe should have such axioms.
This is not doable concretely because you can only measure down to some precision.
It is surprising to find someone on a site dedicated to the Art of Rationality who cannot imagine impersonal norms, since rationality is a set of impersonal norms. You are not compelled to be rational, to be moral, or to play chess. But if you are being rational, you should avoid contradictions—that is a norm. If you are being moral, you should avoid doing unto others what you would not wish done unto you. If you are playing chess, you should avoid placing the bishop on a square that cannot be reached diagonally from the current one. No-one makes you follow those rules, but there is a logical relationship between following the rules and playing the game: you cannot break the rules and still play the game. In that sense, you must follow the rules to stay in the game. But that is not an edict coming from a person or Person.
ETA: You may well need agents to have values. I don’t require morality to be free of values.
To expound on JoshuaZ’s point, would chess have rules even if there were no minds?
Are the rules of chess objective and independent of anyone actually, y’know, knowing them?
ETA: Furthermore, if two agents in a place where chess has never existed come across a chess set, and they have a disagreement about what the rules might be, is one of them right and the other wrong?
The rules of chess would certainly exist, as much as any other mathematical object does. Of course, not every mind would care to follow them...
One could identify many different sets of rules for chess mathematically, but is one of them objectively the “correct” set of rules?
Or does selecting a set of rules from the possibilities always require the action of a subjective mind?
Edit: Also...
… that’s a whole other rabbit hole, no?
There’s exactly one function which is objectively the function which returns 1 for just those moves where a pawn moves one space, or two spaces on the first move, or the bishop moves diagonally, or the king moves one space, where none of the moves intersect other pieces without capturing, etc...
As I said, not every mind will care to evaluate this function.
As for whether mathematical objects exist… is this important? It really adds up to the same thing, either way.
(By the way, have you read the metaethics sequence?)
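As a gesture at the kind of function being described, here is a deliberately tiny sketch: it encodes only the bishop’s geometry (with blocking) and the king’s one-square step on an abstract 8×8 board, so it is a fragment chosen for illustration, not the full rule set.

```python
# A fragment of the 'returns 1 for legal moves' function discussed above.
# Only bishop and king geometry are encoded; captures, checks, pawns,
# castling, etc. are deliberately left out, so this is an illustration
# of 'rules as a function', not chess itself.

def legal(piece, src, dst, occupied):
    """piece: 'bishop' or 'king'; src/dst: (file, rank) pairs in 0..7;
    occupied: set of squares holding any piece (used only for blocking)."""
    if src == dst or dst in occupied:
        return 0
    df, dr = dst[0] - src[0], dst[1] - src[1]
    if piece == "king":
        return 1 if max(abs(df), abs(dr)) == 1 else 0
    if piece == "bishop":
        if abs(df) != abs(dr):
            return 0
        step = (df // abs(df), dr // abs(dr))
        square = (src[0] + step[0], src[1] + step[1])
        while square != dst:  # every intermediate square must be empty
            if square in occupied:
                return 0
            square = (square[0] + step[0], square[1] + step[1])
        return 1
    return 0

print(legal("bishop", (2, 0), (5, 3), occupied=set()))    # 1
print(legal("bishop", (2, 0), (5, 3), occupied={(4, 2)}))  # 0 (blocked)
```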
Yes, of course. I wasn’t arguing anything else. The person I was contending with is a moral realist, who would say that the function which represents those rules, the rules under which we play chess now, is the correct set of rules, and that this correctness is objective and independent of the minds of chess players.
This person would, I presume, argue that if suddenly every chess player in the world at the same time agreed to eliminate en passant from the game of chess, that they would then be playing the game “wrong”.
That is the position which I find nonsensical. I’m not arguing for anything bizarre here. I’m a Bayesian rationalist and a reductionist and yes I have read the sequences.
I explained the point I was making and that wasn’t it: The point was what obligation/compulsion means. It doesn’t mean it is physically impossible to do what is morally forbidden. It doesn’t mean it is an edict you will be punished for disobeying. It does mean that it is logically impossible to be moral (or rational or a chess player) after having significantly departed from the rules.
They would be playing a different game: chess 2.0 or chess++. Plainly, you can’t have one player using the revised rules and her opponent the old ones.
With your chess analogy, those moves that are forbidden are set by human minds and decisions. The game of chess itself is a product of human intelligence, and they can change the rules over time, and indeed they have.
Are you saying that morality works the same way? That what is morally forbidden are those things which most people find objectionable / assign negative value to?
Dude, you just said a minute ago that the word “chess” could be a family of different but related rulesets when I asked about castling, but now when it comes to changing en passant the game becomes something else entirely? I think you should respond to my question on that thread about a precise explanation of what you mean by “chess”, as I cannot figure out why some things count and others do not.
If you vary the way games work too much, you end up with useless non-games (winning is undefinable, one player always wins...). If you vary the way rationality works too much, you end up with paradox, quodlibet etc. If you vary the rules of metaethics too much, you end up with anyone being allowed to do anything, or nobody being allowed to do anything. “The rules are made up” doesn’t mean the rules are arbitrary.
There is a family of chess-type games, and they are different games, because they are not intersubstitutable.
I find that perplexing. Perhaps you mean many sets of rules could be used to play games with chess boards and pieces. But they are not all chess. Chess is its rules. Same rules + different pieces = same game. Different rules + same pieces = different game.
This isn’t strictly speaking true. Note that there have been many different games called chess. For example, pawns being able to move 2 squares on their first move, en passant, castling, and the queen being able to move as she can, are all recent innovations. But let’s put that aside and explore your analogy. If there’s one thing called “morality” then I fail to see how it isn’t just one game among many. You seem to be treating morality like chess (in that there’s an objective thing that is or is not chess) but are then bringing along for the particular game called “morality” all sorts of assumptions about whether or not people should play it or expect to play it. This seems akin to asserting that because there’s only one objective game called “chess”, entities “should” play it.
In Italy one can still find older chess players who use an alternative castling rule, from when castling was first being introduced, called “free castling” in which the rook can take any of the squares between itself and the king, or the king’s position, rather than the single permitted position (depending on the side) of the more common castling rules we play with today.
Is one of these versions the “correct” way to play chess? Or does it depend entirely on the subjective viewpoint of the chess players?
Which way is the correct way to play “chess” depends on which definition of the word chess you are using. In general, we resolve ambiguities like that by looking at the speaker’s intent. (The speaker does not have to be one of the players.)
Yes, I know that. I’m asking rhetorical questions to Peter who is a moral realist.
Chess might be a small and closely related family of rule-sets. That doesn’t affect anything.
Alright, this is your analogy, and instead of dancing around and arguing definitions can you explain, in precise terms, what you mean when you say chess?
I’ll take the bait: A place where the idea of chess had never been thought of couldn’t, by definition, contain a chess set. It could contain an artifact physically identical to one, which we could call one if we weren’t being precise, but it would have to have come from some origin besides humans intending to build a chess set. Thus it would be perfectly reasonable for the two to speculate (and be right or wrong) about what (if any) use it was meant for by its actual builders (if any).
Do you seriously mean to imply that something identical to chess set is not a chess set? The words “chess set” as I used them above are meant only to connect to a physical object, not intentions.
In practical terms I agree completely. My argument with Peter wasn’t actually about chess though, so it doesn’t make a ton of sense when you focus on particulars of the analogy, especially an analogy so flawed as this one.
Do you think we disagree on any issue of substance? If so, where?
A duplicate of the Mona Lisa wouldn’t be the Mona Lisa.
Have you read the quantum physics sequence? Are you familiar with the experimental evidence on particle indistinguishability?
I’m familiar with it anyway. The point is that things like history, provenance and cultural significance are built into the way we think about things, part of the connotational cloud. That doesn’t contradict QM, but it does complicate schemes to losslessly reduce meaning to physics.
Nothing I have to say about morality or metaethics hinges on that one way or the other.
Then clearly we have badly miscommunicated.
I’m sorry Peter, but I do not subscribe to your flights of non-empirical philosophy. We have been around this over and over. I could sit here and explain to you how the chess analogy fails, how the rationality analogy fails, et cetera and so on.
You embody Hollywood rationality. Your conception of belief and thought is entirely verbal and anthropocentric. You focus too hard on philosophy and not enough on physics. As a result, you can be seen almost palpably grasping at straws.
I avoid doing unto others what I would not wish done unto me because that policy, when shared by social animals like humans, leads to results I prefer. I have evolved that preference. I voluntarily cooperate because it is in my direct interest, i.e. fulfills my values and preferences.
I would find it easier to believe you could if you had.
Reasoning about preferences is still reasoning.
I don’t deny that you can reason about preferences. All I’m saying is that to make a decision about whether to keep, discard, or modify your own preferences, the only metrics you have to check against are your own existing values and preferences. There are no moral facts out there in the universe to check against.
Do you disagree?
It turns out that I couldn’t walk away so easily, and so I, and several others, have.
Next I’ll be saying that mathematicians can come up with objectively true theorems without checking them against Paul Erdős’s Book.
Values and preferences can be re-evaluated according to norms of rationality, such as consistency. We generally deem the outputs of reasoning processes to be objective, even absent the existence of a domain of things to be checked against.
First, that book only has the elegant proofs. Second, this totally misses the point: the claim that it is objectively true or false whether a statement is a theorem of a given formal system is distinct from the claim that some set of axioms is objectively worth paying attention to. Even if two mathematicians disagree about whether or not one should include the Axiom of Choice in set theory, they’ll both agree that doing so is equivalent to including Zorn’s Lemma.
You aren’t just claiming that there are “theorems” from some set of moral axioms, but seem to be claiming that some sets of axioms are intrinsically better than others. You keep making this sort of leap and keep giving it no substantial justification other than the apparent reasoning that you want to be able to say things like “Gandhi was good” or “genocide is bad” and feel that there’s objective weight behind it. And we all empathize with that desire, but that doesn’t make those statements more valid in any objective sense.
I haven’t said anything is intrinsically better: I have argued that the choice of basic principles in maths, morality, etc. is constrained by what we expect to be able to do with those things.
If you vary the way games work too much, you end up with useless non-games (winning is undefinable, one player always wins...). If you vary the way rationality works too much, you end up with paradox, quodlibet etc. If you vary the rules of metaethics too much, you end up with anyone being allowed to do anything, or nobody being allowed to do anything.
Yes. I do want to be able to say murder is wrong. I should want to be able to say that. It’s a feature, not a bug. What use is a new improved rationalised system of mathematics which can’t support 2+2=4?
Peter, how do you reconcile this statement with your statements, such as the ones here, where you say that
I don’t see the problem. What needs reconciling with what?
How can you say someone has a glitch if they simply aren’t adopting your system which you acknowledge is arbitrary?
By arbitrarily declaring what qualifies as a glitch. (Which is only partially arbitrary if you have information about typical or ‘intended’ behaviour for an agent.)
Yet again: I never said morality was arbitrary.
I said he has a glitch if he can’t see that murder is wrong. I didn’t say he had to arrive at it the way I arrived at it. I am selling a metaethical theory. I am not selling a first-order system of morality like Roman Catholicism or something. I use core intuitions, common to all first-order systems, as test cases. If you can’t get them out of your metaethical principles, you are doing something wrong.
What use is a new improved rationalised system of mathematics which can’t support 2+2=4?
So morality is like chess, but there’s some sort of grounding for why we should play it? I am confused as to what your position is.
I’m not sure what you mean by that. If I’m following your analogy correctly then this is somewhat wrong. Any reasonable general philosophy of metamathematics would tell you that 2+2=4 is only true in certain axiomatic systems. For example, if I used as an axiomatic system all the axioms of ZFC but left out the axiom of infinity and the axiom of replacement, then I cannot show that + is a well-defined operation. But this is an interesting system which has been studied. Moreover, nothing in my metamathematics tells me that I should be more interested in ZFC or Peano Arithmetic. I am more interested in those systems, but that’s due to cultural and environmental norms. And one could probably have a whole career studying weak systems where one cannot derive 2+2=4 for the most natural interpretations of “2”, “+”, “=”, and “4” in that system.
To return to the original notion, just because a metaethical theory has to support that someone within their moral and ethical framework holds “murder is wrong” doesn’t mean that the metaethical system must consider that to be a non-arbitrary claim. Similarly, just because our metamathematical theory can handle 2+2=4 doesn’t mean it needs to assert that 2+2=4 in some abstract sense.
I know this is a sidetrack, but I don’t think that’s right, unless we’re omitting the axiom of pairing as well. Can’t we use pairing to prove the finite version of replacement? (This needs an induction, but that doesn’t require the axioms of replacement or infinity.) Hence, can’t we show that addition of finite ordinals is well-defined, at least in the sense that we have a class Plus(x,y,z) satisfying the necessary properties?
(Actually, I think it ought to be possible to show that addition is well-defined even without pairing, because power set and separation alone (i.e. together with empty set and union) give us all hereditarily finite sets. Perhaps we can use them to prove that {x,y} exists when x and y are hereditarily finite.)
If we don’t have the axiom of infinity then addition isn’t a function (since its domain and range aren’t necessarily sets).
Sure, in the sense that it’s not a set. But instead we can make do with a (possibly proper) “class”. We define a formula Plus(x,y,z) in the language of set theory (i.e. using nothing other than set equality and membership + logical operations), then we prove that for all finite ordinals x and y there exists a unique finite ordinal z such that Plus(x,y,z), and then we agree to use the notation x + y = z instead of Plus(x,y,z).
This is not an unusual situation in set theory. For instance, cardinal exponentiation and ‘functions’ like Aleph are really classes (i.e. formulas) rather than sets.
Yes. But in ZFC we can’t talk about classes. We can construct predicates that describe classes, but one needs to prove that those predicates make sense. Can we in this context show that Plus(x,y,z) is a well-defined predicate that acts like we expect addition to act (i.e. associative, commutative, and with 0 as an identity)?
In practice we tend to throw them around even when working in ZFC, on the understanding that they’re just “syntactic sugar”. For instance, if f(x,y) is a formula such that for all x there exists unique y such that f(x,y), and phi is some formula then rather than write “there exists y such that f(x,y) and phi(y)” it’s much nicer to just write “phi(F(x))” even though strictly speaking there’s no such object as F.
I think the proofs go through almost unchanged (once we prove ‘finite replacement’).
I’m not as confident but foundations is very much not my area of expertise. I’ll try to work out the details and see if I run into any issues.
Well, we could define Plus(x,y,z) by “there exists a function f : x → z with successor(max(codomain(f))) = z, which preserves successorship and sends 0 to y”. (ETA: This only works if x > 0 though.)
And then we just need to do loads of inductions, but the basic induction schema is easy:
Suppose P(0) and for all finite ordinals n, P(n) implies P(n+1). Suppose ¬P(k). Let S = {finite ordinals n : ¬P(n) and n ≤ k}. By the axiom of foundation, S has a smallest element m. Then ¬P(m). But then either m = 0 or P(m-1), yielding a contradiction in either case.
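For concreteness, the Plus formula sketched above could be written out as follows, assuming x > 0 (per the ETA), reading “codomain” as the range of f, and writing S(a) = a ∪ {a} for the successor:

\[
\mathrm{Plus}(x,y,z) \;:\Longleftrightarrow\; \exists f \, \Bigl[ f \colon x \to z \ \wedge\ f(\varnothing) = y \ \wedge\ \forall n \, \bigl( S(n) \in x \rightarrow f(S(n)) = S(f(n)) \bigr) \ \wedge\ S\bigl(\max(\operatorname{ran} f)\bigr) = z \Bigr].
\]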
Yes, this seems to work.
Sure, but we still have a “class”. “Classes” are either crude syntactic sugar for “formulae” (as in ZFC) or they’re a slightly more refined syntactic sugar for “formulae” (as in BGC). In either case, classes are ubiquitous—for instance, ordinal addition isn’t a function either, but we prove things about it just as if it was.
Are you asserting that being “moral” is just like a game with a set of agreed upon rules? That doesn’t fit with your earlier claims (e.g. this remark) and on top of that seems to run into the problem of people not agreeing what the rules are. Note, incidentally, that it is extremely unlikely that any random intelligence will either know or have any desire to play chess. If you think the same applies to your notion of morality then there’s much less disagreement, but that doesn’t seem to be what you are asserting. I am confused.
Agreed upon? Most of the game is in making up rules and forcing them on others despite their disagreement!
Although that happens with other games also, when there’s a disagreement about the rules. It just seems to be a smaller fraction of the game and something that everyone tries to avoid. There are some games that explicitly lampshade this. The official rules of Munchkin say something like (paraphrase) “in a rules dispute whoever shouts loudest is right.”
Upvoted for levity! Whew, we needed it.
I am tremendously confused as to why this and the parent were downvoted. Clearly I should just stop posting.
Or quit caring about the voting system.
That is also an option.
Although it would be unpleasant, I think, to be the loud guy at the party who no one wants to be there. Still, I overreacted, I think. It was a relatively small number of my posts that were received negatively, and absent an explanation of why they were, all I can do is work on refining my rationality and communication skills.
Edit: Downvoted lol
Further edit: This could be like rejection therapy for karma. Everyone downvote this post!
I’m arguing that there is a sense in which one “should” follow rules which has nothing to do with human-like agents laying down the law, thereby refuting NMJ’s attempt at a link between objective morality and theism.
There are constraints on the rules games can have (eg fairness, a clear winner after finite time). There are constraints on rationality (eg avoidance of quodlibet). Likewise, there are constraints on the rules of moral reasoning. (eg. people cannot just make up their own morality and do what they want). Note that I am talking about metaethics here.
So this is the exact opposite of a chess game. So what do you mean by your analogy?
Wow, that’s right up there with attempts to define God that end up defining God to be “the universe”, “the laws of physics”, or “mathematics”.
Yes, technically you can define God to be any of these things and any of the above definitions would make me a theist. However, I don’t think any of the above definitions are particularly helpful.
All I’m saying is that there needs to be an intelligence, some value-having agent or entity, in order for actions to be judged. If there is no intelligence, there are no values.
Judging requires an agent. But values do not. That just requires an object capable of representing information. The universe could have values built into it even without having intelligence to be judging with it. (Completely irrelevant observation.)
I see what you mean there, but without intelligence those values would be just static information. I don’t see how the moral realist’s conception of objective morality can make any sense without an intelligent agent.
I suppose, in connection with your point about “subjective objectivity” a moment ago, I can see how any set of values can be said to “exist” in the sense that one could measure reality against them.
Edit: That doesn’t seem to change anything ethically though. We can call it objective if we like, but to choose which of those we call “right” or “moral” depends entirely on the values and preferences of the querying agent.
And if someone changes their values or preferences as a result of exhortation or self-reflection... what do values and preferences then depend on?
The physical change in a mind over time, i.e. cognition.
If someone chooses, or would choose, to adopt a new value or preference, they do so by referring to their existing value / preference network.
I’ve considered this idea before, but I can’t imagine what, if anything, it would actually entail.
In fact judging only requires an optimization process. Not all optimization processes are agents or intelligent.
It doesn’t even need to be an optimization process. Just a process.
I don’t see how that follows.
My position is more-or-less the one argued by Marius in this thread. Especially the second possibility in this comment.
Also, I think it would be better to take all further discussion into that thread.
I never said anything about religion. Here I’m merely pointing out wedrifid’s hypocrisy.
Sorry to use such a strong word, but I honestly can’t see the difference between what wedrifid called Peter logically rude for doing, and what wedrifid is doing in the ancestor.
Wedrifid never said dashing babies against rocks was “wrong in the cosmic eyes of the universe”. I suspect s/he values a culture that doesn’t do that. That would be his or her personal value / preference.
We don’t need to subscribe to your theory of absolute morals to prefer, support, or value things.
That seems like a likely assumption. For a start it is terribly unhygienic. All brains, blood and gore left around the place. Lots of crows picking at little bits of baby. Flies and maggots. Ants. A recipe for spreading plague.