I believe that there is an objective system of verifiable moral facts which can be true or false. … Since human minds are part of objective reality, they can be analyzed, and objective, verifiable propositions can be stated about them.
But those “objective” facts would only be about the intuitions of individual minds,
and then formulate equations to calculate how moral something is for a particular agent.
Same problem. A thinks it is moral to kill B, B thinks it is not moral to be killed by A. Where is the objective moral fact there? Objective moral facts (or at least intersubjective ones) need to resolve conflicts between individuals. You have offered nothing that can do that. Morality cannot just be a case of what an individual should do, because individuals interact.
However, if one is to ask a moral question without including a specific group-referent (though usually, “all humans” or “most humans” is implicit) from which one can extract that objective algorithm that makes things moral or not, then there is no “final word” or “ultimate truth” about which answer is right, and in fact the question seems hopelessly self-contradictory to me
Then morality is not so objective that it is graven into the very fabric of the universe. The problem remains that what you have presented is too subjective to do anything useful. By all means present a theory of human morality that is indexed to humans, but let it regulate interactions between humans.
However, what opinions people generate and what turns out to be objectively moral are correlated, but from a third cause—one that is still a black box which we cannot describe very accurately (
That is hard to interpret. Why should opinions be what is “objectively moral”? You might mean there is nothing more to morality than people’s judgements about what is good or bad, but that is not an objective feature of the universe; it is mind projection. That the neural mechanisms involved are objective does not make what is projected by them objective. If objective neural activity makes me dream of unicorns, unicorns are not thereby objective.
And in any case, what is important is co-ordinating the judgements of individuals in the case of conflict.
Since humans can influence the state of reality, and there’s an algorithm somewhere that determines what we find moral,
“We” individually, or “we” collectively? That is a very important point to skate over.
and humans “prefer” things that are moral (are programmed to act in a way that brings about higher quantities of this “moral” stuff), then if they do things which probably lead to more of it, they prefer that result, and if otherwise, they would have preferred that first result. It follows from this that humans should do things which (probably) lead to higher values of this moral stuff.
That seems to be saying that it is instrumentally in people’s interests to be moral. But if that were always straightforwardly the case, then there would be no issues of sacrifice and self-restraint involved in morality, which is scarcely credible. If I lay down my life for my country, that might lead to the greater good, but how good is it for me? The issue is much more complex than you have stated.
(part 2 of two-part response, see below or above for the first)
That seems to be saying that it is instrumentally in people’s interests to be moral. But if that were always straightforwardly the case, then there would be no issues of sacrifice and self-restraint involved in morality, which is scarcely credible.
See this later comment but this one especially (the first is mostly for context) to see that I do indeed take that into account.
The key point is that “morality” isn’t straightforwardly “what people want” at all. What people consider moral when they evaluate all the information available to them and what people actually do (even with that information available) are often completely different things.
Note also that context and complicated conditionals become involved in Real Issues™. To throw out a toy example:
Julie might find it moral to kill three humans because she values the author of this post saying “Shenanigans” out loud only a bit less than their lives, and the author has committed to saying it three times out loud for each imaginary person dead in this toy example. However, Jack doesn’t want those humans dead, and has credibly signaled that he will be miserable forever if those three people die. Jack also doesn’t care about me saying “Shenanigans”.
Thus, because Julie cares about Jack’s morality (most humans, I assume, have values in their morality for “what other people of my tribe consider moral or wrong”), she will “make a personal sacrifice and use self-restraint” to not kill the three nameless, fortunate toy humans. The naive run of her morality over the immediate results says “Bah! Things could have been more fun.”, but game-theoretically she gains an advantage in the long term—Jack now cooperates with her, which means she incurs far fewer losses overall and still gains some value from her own people-alive moral counter and from Jack’s people-alive moral counter as well.
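To make that trade-off concrete, here is a minimal Python sketch with made-up numbers; every weight below (for lives, “Shenanigans” utterances, Jack’s misery, and future cooperation) is a hypothetical assumption, not something stated in the toy example.

```python
# Toy sketch of Julie's choice; every number here is a hypothetical assumption.
LIFE_VALUE = 1.0            # Julie's value for each of the three lives
SHENANIGANS_VALUE = 0.9     # "only a bit less than their lives", per utterance
UTTERANCES_PER_DEATH = 3
JACK_MISERY_COST = 5.0      # Julie's weight on Jack being miserable forever
COOPERATION_VALUE = 4.0     # long-term value of keeping Jack's cooperation

# Naive run of Julie's morality over the immediate results only.
naive_kill_gain = 3 * (UTTERANCES_PER_DEATH * SHENANIGANS_VALUE - LIFE_VALUE)

# Full accounting: killing also makes Jack miserable and costs his cooperation.
kill_total = naive_kill_gain - JACK_MISERY_COST - COOPERATION_VALUE
restraint_total = 0.0  # forgo the fun, lose nothing, keep Jack happy and cooperative

print(round(naive_kill_gain, 1))                # 5.1  -> "Bah! Things could have been more fun."
print(round(kill_total, 1), restraint_total)    # -3.9 vs 0.0 -> self-restraint wins
```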
If I lay down my life for my country, that might lead to the greater good, but how good is it for me? The issue is much more complex than you have stated.
I think you are vastly confusing “good”, “greater good”, and “good for me”. These need to be tabooed and reduced. Again, example time:
Tom the toy soldier cares about his life. Tom cares about the lives of his comrades. Tom cares about the continuation of the social system that can be summarized as “his country”.
If Tom dies without any reason or effect, this is clearly bad. However, Tom values the end of his country at 1/2 of his life. So far, he’s still not going to die for it. Tom also values each comrade’s life at 1/10th of his life. Still not going to die for his country. Tom also knows that the end of his country means a 95% chance that 200 of his comrades will die, and with the other 5% they all live. If the country does not end, there’s a 50% chance that 100 of his comrades will die anyway, and with 50% they live.
If Tom lives, there is 95% chance (as far as Tom knows, to his evidence, etc. etc.) that the country will end. If Tom sacrifices himself, the country is saved (with “certainty”, usual disclaimers etc. etc.).
So if Tom lives, Tom’s values go to −1/2 plus .95 chance of .95 chance of −20. If Tom sacrifices himself, the currently-alive Tom values this at −1 plus .5 chance of −10. Values are in negative utility only for simplicity of calculation, but this could be described at length in any other system you want (with a bit more effort though).
So the expected utility comes out at −18.55 if Tom lives, and −6 if Tom sacrifices himself, since Tom is a magical toy human and isn’t biased in any way and always shuts up and calculates and always knows exactly his own morality. So knowing all of this, Tom lays down his life for his country and what he would think of as “the greater good”.
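For anyone who wants to check the arithmetic, here is the calculation exactly as stated above, rerun in Python (this only reproduces the numbers given in the toy example; it is not a defence of the model itself):

```python
# Tom's toy expected utilities, in units where Tom's own life = 1,
# his country = 1/2, and each comrade's life = 1/10.
p_country_ends_if_tom_lives = 0.95
p_200_comrades_die_if_country_ends = 0.95
p_100_comrades_die_if_country_survives = 0.5

live = -0.5 + p_country_ends_if_tom_lives * (p_200_comrades_die_if_country_ends * (200 * -0.1))
sacrifice = -1.0 + p_100_comrades_die_if_country_survives * (100 * -0.1)

print(round(live, 2), round(sacrifice, 2))  # -18.55 -6.0: sacrificing himself is the lesser loss
```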
I really don’t see how I’ve excluded this or somehow claimed that all of this was magically whisked away by any of what I said.
Overall, I think the only substantive disagreement we had is in your assessment that I didn’t think of / say anything useful towards solving interpersonal moral conflicts (I’m pretty sure I did, but mostly implicitly). I think the issue of what “morality” is for is entirely an empty word problem and should be ignored.
I’ll gladly attempt to reduce or taboo terms in response to reasonable requests to do so. If you think there are other issues we disagree on, I’d like them to be said. However, I would much appreciate efforts to avoid logical rudeness, and would also greatly appreciate it if, in further responses, you (or anyone else replying) assumed that I haven’t thought through this only at the single-tier, naive level, with no more than five minutes of thought.
Or, to rephrase positively: Please assume you’re speaking to someone who has thought of most of the obvious implications, has thought about this for a very considerable amount of time, has done some careful research, and thinks that this all adds up to normality.
So knowing all of this, Tom lays down his life for his country and what he would think of as “the greater good”.
Tom will sacrifice himself if his values lead him to, and not if they don’t. He might desert or turn traitor. You would still call that all moral because it is an output of the neurological module you have labelled “moral”.
I think the issue of what “morality” is for is entirely an empty word problem and should be ignored.
I think it isn’t. If someone tries to persuade you that you are wrong about morality, it is useful to consider the “what is morality for” question.
and thinks that this all adds up to normality.
Do you think any of this adds up to any extent of a solution to the philosophical problems of morality/ethics?
Tom will sacrifice himself if his values lead him to, and not if they don’t. He might desert or turn traitor. You would still call that all moral because it is an output of the neurological module you have labelled “moral”.
Yes!
...
.
(this space intentionally left blank)
.
.
Do you think any of this adds up to any extent of a solution to the philosophical problems of morality/ethics?
What specific philosophical problems? Because yes, it does help me clarify my thoughts and figure out better methods of arriving at solutions.
Does it directly provide solutions to some as-yet-unstated philosophical problems? Well, probably not, since the search space of possible philosophical problems related to morality or ethics is pretty, well, huge. The odds that my current writings provide a direct solution to any given random one of them are pretty low.
If the question is whether or not my current belief network contains answers to all philosophical problems pertaining to morality and ethics, then a resounding no. Is it flabbergasted by many of the debates and many of the questions still being asked, and does it consider many of them mysterious and pointless? A resounding yes.
Consequentialism versus deontology, objectivism versus subjectivism, as in the context.
Oh. Yep.
As I said originally, both of those “X versus Y” and many others are just confusing and mysterious-sounding to me.
They seem like the difference between Car.Accelerate() and AccelerateObject(Car) in programming. Different implementations, some slightly more efficient for some circumstances than others, and both executing the same effective algorithm—the car object goes faster.
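A minimal sketch of that analogy in Python (the class and function names are just illustrative): a method call and a free function are two surface forms of the same underlying update.

```python
class Car:
    def __init__(self) -> None:
        self.speed = 0.0

    def accelerate(self, delta: float) -> None:
        # Car.Accelerate(): the object updates its own speed.
        self.speed += delta


def accelerate_object(car: Car, delta: float) -> None:
    # AccelerateObject(Car): a free function performs the same update.
    car.speed += delta


a, b = Car(), Car()
a.accelerate(5.0)
accelerate_object(b, 5.0)
assert a.speed == b.speed  # different call styles, same effective algorithm
```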
Any would be good. Metaethics is sometimes touted as a solved problem on LW.
Oh. Well, yeah, it does sound kind-of solved.
Judging by the Wikipedia description of “meta-ethics” and the examples it gives, I find the meta-ethics sequence on LW gives me more than satisfactory answers to all of those questions.
You previously said something much more definite-sounding:
“I believe that there is an objective system of verifiable moral facts which can be true or false”
…although it has turned out you meant something like “there are objective facts about de facto moral reasoning”.
The alleged solution seems as elusive as the Snark to me.
You seem to misunderstand most of my beliefs, so I’ll try to address that first before I go any further to avoid confusion.
But those “objective” facts would only be about the intuitions of individual minds,
No. Just no. No no no no no no no no no no no no no. NO! NO!
The objective fact is that there is a brain made mostly of neurons and synapses and blood and other kinds of juicy squishiness, inside which a certain bundle of those synapses is set in a certain particularly complex (as far as we know) arrangement, and when something is sent as input to that bundle of synapses of the form “Kill this child?”, the bundle sends queries to other bundles: “Benefits?” “People who die if child lives?” “Hungry?” “Have we had sex recently?” “Is the child real?” etc.
Then, an output is produced, “KILLING CHILD IS WRONG” or “KILLING CHILD IS OKAY HERE”.
Human consciousness, the “you” that is you and that wouldn’t randomly decide to start masturbating in public while sleepwalking (you don’t want to be the guy that happened to, seriously), doesn’t have access to the whole thing that the bundle of synapses called “morality” inside the brain actually does. It only has the output, and sometimes glimpses of some of the queries that the bundle sent to other bundles.
In other words, intuitions.
What I refer to as an “objective fact”, the “objective” morality of that individual, is the entire sum of the process, the entire bundle + reviewing by conscious mind on each individual process + what the conscious mind would want to fix in order to be even more moral by the morals of the same bundle of synapses (i.e. self-reflectivity). The exact “objective morality” of each human is a complicated thing that I’m not even sure I grasp entirely and can describe adequately, but I’m quite certain that it is not limited to intuitions and that those intuitions are not entirely accurate.
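As a purely illustrative toy model of the input, queries, and output picture above (the features and weights are my own assumptions for the sketch, not a claim about how brains are actually wired):

```python
# Toy model of the "bundle of synapses" described above; the queries mirror
# the ones in the text, the weights are made up for illustration.
def moral_bundle(situation: dict) -> str:
    benefits = situation.get("benefits", 0)                            # "Benefits?"
    deaths_if_child_lives = situation.get("deaths_if_child_lives", 0)  # "People who die if child lives?"
    child_is_real = situation.get("child_is_real", True)               # "Is the child real?"

    # The conscious mind never sees this weighting; it only sees the output (an "intuition").
    score = benefits + deaths_if_child_lives - (100 if child_is_real else 0)
    return "KILLING CHILD IS WRONG" if score < 0 else "KILLING CHILD IS OKAY HERE"


print(moral_bundle({"benefits": 2}))                          # KILLING CHILD IS WRONG
print(moral_bundle({"child_is_real": False, "benefits": 1}))  # KILLING CHILD IS OKAY HERE
```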
Same problem. A thinks it is moral to kill B, B thinks it is not moral to be killed by A. Where is the objective moral fact there? (...)
The “objective moral fact” (to use your words), in this toy problem, is that IF AND ONLY IF A is correct when A thinks it is moral for A’s morality system to kill B, and B is correct when B thinks it is moral for B’s system to not be killed by A, then and only then it is moral for A to kill B and it is moral for B to not be killed by A. There is no contradiction; the universe is just fucked up and lets shit like this happen.
(...) Objective moral facts (or at least intersubjective ones) need to resolve conflicts between individuals. You have offered nothing that can do that. Morality cannot just be a case of what an individual should do, because individuals interact.
What? No. First, that’s called ethics, the thing about how individuals should interact. The reason ethics is hard is because each individual has a slightly different morality, but the reason it’s feasible at all is because most humans are fairly similar even in this.
Most humans, when faced with the toy problem of saving ten young lives versus three old ones, will save the ten young. Most humans, when they see a child get horribly mutilated or have their flesh melt off of their bones, will be revolted and feel that this is many kinds of Very Wrong.
Most humans, if they have a small something they value a little bit, and they know that giving it up temporarily will make another human’s morality much, much better off, while sticking to keeping their small something to themselves will leave that human feeling horribly wronged, will give up that little bit for the benefit of the other human’s morality.
This seems to indicate that most humans have a component, somewhere in this bundle of synapses, that tries to estimate what the other bundles of synapses in other brains are doing, so as to not upset them too much. This is also part of what helps ethics be feasible at all.
Then morality is not so objective that it is graven into the very fabric of the universe. The problem remains that what you have presented is too subjective to do anything useful. By all means present a theory of human morality that is indexed to humans, but let it regulate interactions between humans.
I don’t even understand what you’re getting at. I’m not trying to come up with a system of norms that tells everyone what they should do to interact with other humans. How is it too subjective to be useful?
I’ve merely presented my current conclusions, the current highest-probability results of computing together all the evidence available to me. These are guesses and tentative assessments of reality, an attempt at approximating and describing what actually goes on out there in human brains that gives rise to humans talking about morality and not wanting to coat children with burning napalm. (sorry if this strikes political chords, I can’t think of a better example of something public-knowledge that the vast majority of humans who learned about it described as clearly wrong)
As for being “too subjective to do anything useful”… what? If I tell you that two cars have different engines, so you can’t use the exact same mathematical formula for calculating their velocity and traveled distance as they accelerate, is this useless subjective information? Because what I’m saying is that humans have different engines in terms of morality, and while like the car engines they have major similarities in the logical principles involved and how they operate, there are key differences that must be taken into consideration to produce any useful discussion about the velocities and positions of each car.
That is hard to interpret. Why should opinions be what is “objectively moral”? You might mean there is nothing more to morality than people’s judgements about what is good or bad, but that is not an objective feature of the universe; it is mind projection. That the neural mechanisms involved are objective does not make what is projected by them objective. If objective neural activity makes me dream of unicorns, unicorns are not thereby objective.
Apologies for being unclear. Opinions are not what is objectively moral, I was saying that the bundle of synapses I described above is both the main part of what is objectively moral (well, the algorithms implemented by the synapses anyway), and what comes out of the bundle of synapses is also what generates the opinions. They are correlated, but not perfectly so, let alone equivalent/equal.
So more often than not, one’s opinion that it is wrong to suddenly start killing and pillaging everyone in the nearest city is a correct assessment about their own morality. On average, most clear-cut moral judgments will be fairly accurate, because they come out of the same algorithms in different manners.
The latter two sentences of this last quote seem to aptly rephrase exactly what I was trying to say. There are objective algorithms and mechanisms in the bundles of nerves, but even though the conscious mind gets a rough idea of what it thinks they might be doing after seeing a “KILLING CHILD IS WRONG” output a hundred times, it still doesn’t have access to the whole thing, and even if it did there are things one would want to correct in order to avoid errors due to bias.
I can’t really be more precise or confident about exactly what morality is in a human’s brain, because I haven’t won five Nobels in breakthrough neurobiology, philosophy, peace, ethics and psychology. I think that’s about the minimum award that would go to someone who had entirely solved and located exactly everything that makes humans moral and exactly how it works.
“We” individually, or “we” collectively? That is a very important point to skate over.
The ambiguity is appropriate, though unintentional. The first response is “we” individually, but to some extent there are many things that all humans find moral, and many more things that most humans find moral. Again the example of napalm-flavored youngsters.
So each of us has a separate algorithm, but if you were to examine them all individually, you could probably (with enough effort and smarts) come up with an algorithm that finds moral only what all humans find moral, or finds moral whatever at least 60% of humans find moral, or some other filtering or approximation.
To give you an example, “2x − 6” will return a positive number as long as x > 3 (let’s not count zero). Similarly, “3x − 3” will return a positive number as long as x > 1. If positive numbers represent a “This is moral and good” output, then clearly they’re not the same morality. However, “x > 3” will guarantee a space of solutions that both moralities find moral and favorable.
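Here is that filtering idea as a minimal Python sketch, using the two toy linear “moralities” from the example above:

```python
# The two toy "moralities" from the example; a positive output means "this is moral and good".
def morality_a(x: float) -> float:
    return 2 * x - 6   # approves only when x > 3

def morality_b(x: float) -> float:
    return 3 * x - 3   # approves only when x > 1

def both_approve(x: float) -> bool:
    # The intersection filter: count something as moral only if every morality examined approves.
    return morality_a(x) > 0 and morality_b(x) > 0

print([x for x in range(7) if both_approve(x)])  # [4, 5, 6], i.e. the x > 3 region
```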
(two-part comment, see above or below for the rest)
The exact “objective morality” of each human is a complicated thing that I’m not even sure I grasp entirely and can describe adequately, but I’m quite certain that it is not limited to intuitions and that those intuitions are not entirely accurate
That’s still not the point. The entire bundle still isn’t Objective Morality, because the entire bundle is still inside one person’s head. Objective morality is what all ideal agents would converge on.
The “objective moral fact” (to use your words), in this toy problem, is that IF AND ONLY IF A is correct when A thinks it is moral for A’s morality system to kill B, and B is correct when B thinks it is moral for B’s system to not be killed by A, then and only then it is moral for A to kill B and it is moral for B to not be killed by A. There is no contradiction; the universe is just fucked up and lets shit like this happen.
The way you have expressed this is contradictory. You said “it is moral”, simpliciter, rather than “it is moral-for-A, but immoral-for-B”. Although to do that would have made it obvious you are talking about subjective morality. And no, it isn’t the universe’s fault. The universe allows agents to have contradictory and incompatible impulses, but it is your choice to call those impulses “moral” despite the fact that they don’t resolve conflicts, or take others’ interests into account. I wouldn’t call them that. I think the contradiction means at least one of the agents’ I-think-this-is-moral beliefs is wrong.
What? No. First, that’s called ethics, the thing about how individuals should interact
I don’t think so
Ethics: “Moral principles that govern a person’s or group’s behavior.”
“1. (used with a singular or plural verb) a system of moral principles: the ethics of a culture.
2. the rules of conduct recognized in respect to a particular class of human actions or a particular group, culture, etc.: medical ethics; Christian ethics.
3. moral principles, as of an individual: His ethics forbade betrayal of a confidence.
4. (usually used with a singular verb) that branch of philosophy dealing with values relating to human conduct, with respect to the rightness and wrongness of certain actions and to the goodness and badness of the motives and ends of such actions.”
I don’t even understand what you’re getting at. I’m not trying to come up with a system of norms that tells everyone what they should do to interact with other humans.
Then what are you doing? The observation that facts about brains are relevant to descriptive ethics is rather obvious.
If I tell you that two cars have different engines, so you can’t use the exact same mathematical formula for calculating their velocity and traveled distance as they accelerate, is this useless subjective information?
If you allow individual drivers to choose which side of the road to drive on, you have a uselessly subjective system of traffic law.
So more often than not, one’s opinion that it is wrong to suddenly start killing and pillaging everyone in the nearest city is a correct assessment about their own morality.
Their own something. I don’t think you are going to convince an error theorist that morality exists by showing them brain scans. And the terms “conscience” and “superego” cover internal regulation of behaviour without prejudice to the philosophical issues.
So each of us has a separate algorithm, but if you were to examine them all individually, you could probably (with enough effort and smarts) come up with an algorithm that finds moral only what all humans find moral, or finds moral whatever at least 60% of humans find moral, or some other filtering or approximation.
Has no bearing on the philosophy, again. All you have there is the intersection of a set of tablets.
That’s still not the point. The entire bundle still isn’t Objective Morality, because the entire bundle is still inside one person’s head. Objective morality is what all ideal agents would converge on.
Okay. That is clearly a word problem, and you are arguing my definition.
The way you have expressed this is contradictory. You said “it is moral”, simpliciter, rather than “it is moral-for-A, but immoral-for-B”. Although to do that would have made it obvious you are talking about subjective morality. And no, it isn’t the universe’s fault. The universe allows agents to have contradictory and incompatible impulses, but it is your choice to call those impulses “moral” despite the fact that they don’t resolve conflicts, or take others’ interests into account. I wouldn’t call them that. I think the contradiction means at least one of the agents’ I-think-this-is-moral beliefs is wrong.
You assumed I was being deliberately sophistic and creating confusion on purpose. After I explicitly requested twice that things be interpreted the other way around where possible. I thought that it was very clear from context that what I meant was that:
IFF It is moral-A that A kills B
&& It is moral-B that B is not killed by A
&& There are no other factors influencing moral-A or moral-B
THEN:
It is moral for A that A kills B and it is likewise moral for B to not be killed by A. Let the fight begin.
If you allow individual drivers to choose which side of the road to drive on, you have a uselessly subjective system of traffic law.
Please stop this. I’m seeing more and more evidence that you’re deliberately ignoring my arguments and what I’m trying to say, and that you’re just equating everything I say with “This is not a perfect system of normative ethics, therefore it is worthless”.
I have a hard time even inferring what you mean by this rather irrelevant-seeming metaphor. I’m not talking about laws and saying “The law should only punish those that act against their intuitions of morality, oh derp!”—I’m not even talking about justice or legal systems or ideal societies at all! Have I somewhere accidentally made the claim that we should just let every single human build their own model of their own system of morality with incomplete information and let chaos ensue?
Their own something. I don’t think you are going to convince an error theorist that morality exists by showing them brain scans. And the terms “conscience” and “superego” cover internal regulation of behaviour without prejudice to the philosophical issues.
Yes. And in case that wasn’t painfully obvious yet, this “something” of their own is exactly what I mean to say when I use the word “morality”!
I’m not attempting to convince anyone that “morality” “exists”. To engage further on this point I would need those two words to be tabooed, because I honestly have no idea what you’re getting at or what you even mean by that sentence or the one after it.
Has no bearing on the philosophy, again. All you have there is the intersection of a set of tablets.
Yup. If I agree to use your words, then yes. There’s an intersection of a set of tablets. These tablets give us some slightly iffy commandments that even the owner of the tablet would want to fix. The counterfactual edited version of the tablet, after the owner has made the fixes, checked again to see if they want to fix anything, and is happy with the result, is exactly what I am pointing at here. I’ve used the words “objective morality” and “true moral preferences” and “moral algorithms” before, and all of those were pointing exactly at this. Yes, I claim that there’s nothing else here, move along.
If you want to have something more, some Objective Morality (in the sense you seem to be using that term) from somewhere else, humans are going to have to invent it. And either it’s going to be based on an intersection of edited tablets, or a lot of people are going to be really unhappy.
That is clearly a word problem, and you are arguing my definition.
I can see that it is a word problem, and I would argue that anyone would be hard pressed to guess what you meant by “objective moral facts”.
It is moral for A that A kills B and it is likewise moral for B to not be killed by A. Let the fight begin.
What fight? You have added the “for A” and “for B” clauses that were missing last time. Are you holding me to blame for taking you at your word?
Really? You’re going there?
You claimed a distinction in meaning between “morality” and “ethics” that doesn’t exist. Pointing that out is useful for clarity of communication. It was not intended to prove anything at the object level.
I have a hard time even inferring what you mean by this rather irrelevant-seeming metaphor. I’m not talking about laws and saying “The law should only punish those that act against their intuitions of morality, oh derp!”—I’m not even talking about justice or legal systems or ideal societies at all! Have I somewhere accidentally made the claim that we should just let every single human build their own model of their own system of morality with incomplete information and let chaos ensue?
I don’t know how accidental it was, but your “moral for A” and “moral for B” comment does suggest that two people can be in contradiction and yet both be right.
Yes. And in case that wasn’t painfully obvious yet, this “something” of their own is exactly what I mean to say when I use the word “morality”!
I am totally aware of that. But you don’t get to call anything by any word. I was challenging the appropriateness of making substantive claims based on a naming ceremony.
I’m not attempting to convince anyone that “morality” “exists”.
You said there were objective facts about it!
Yup. If I agree to use your words, then yes. There’s an intersection of a set of tablets. These tablets give us some slightly iffy commandments that even the owner of the tablet would want to fix. The counterfactual edited version of the tablet, after the owner has made the fixes, checked again to see if they want to fix anything, and is happy with the result, is exactly what I am pointing at here. I’ve used the words “objective morality” and “true moral preferences” and “moral algorithms” before, and all of those were pointing exactly at this. Yes, I claim that there’s nothing else here, move along.
You haven’t explained that, or how or why, different individuals would converge on a single objective reality by refining their intuitions. And no, EY doesn’t either.
If you want to have something more, some Objective Morality (in the sense you seem to be using that term) from somewhere else, humans are going to have to invent it.
if they haven’t already.
And either it’s going to be based on an intersection of edited tablets, or a lot of people are going to be really unhappy.
So values and intuitions are a necessary ingredient. Any number of others could be as well.
I can see that it is a word problem, and I would argue that anyone would be hard pressed to guess what you meant by “objective moral facts”.
If individual moralities have enough of a common component that we can point to principles and values that are widely-shared among living people and societies, that would certainly count as a “fact” about morality, which we could call a “moral fact”. And that fact is certainly “objective” from the POV of any single individual, although it’s not objective at all in the naïve Western sense of “objectivity” or God’s Eye View.
You claimed a distinction in meaning between “morality” and “ethics” that doesn’t exist.
Dictionary definitions are worthless, especially in specialized domains. Does a distinction between “morality” and “ethics” (or even between “descriptive morality” and “normative morality”, if you’re committed to hopelessly confused and biased naming choices by academic philosophers) cut reality at its joints? I maintain that it does.
If individual moralities have enough of a common component that we can point to principles and values that are widely-shared among living people and societies, that would certainly count as a “fact” about morality, which we could call a “moral fact”. And that fact is certainly “objective” from the POV of any single individual, although it’s not objective at all in the naïve Western sense of “objectivity” or God’s Eye View.
And it is still not an objective moral fact in the sense of Moral Objectivism, in the sense of a first-order fact that makes some moral propositions mind-independently true. It’s a second-order fact.
Dictionary definitions are worthless, especially in specialized domains.
I’ve never seen that distinction in the specialised domain in question.
And it is still not an objective moral fact in the sense of Moral Objectivism, in the sense of a first-order fact that makes some moral propositions mind-independently true.
I don’t think that’s a coincidence. Whether there is some kind of factual (e.g. biological) base for morality is an interesting question, but it’s generally a question for psychology and science, not philosophy. People who try to argue for such a factual basis in a naïve way usually end up talking about something very different than what we actually mean by “morality” in the real world. For an unusually clear example, see Ayn Rand’s moral theory, incidentally also called “Objectivism”.
Just got bashed several times, while presenting the fragility of values idea in Oxford, for using the term “descriptive morality”. I was almost certain Eliezer used the term, hence I was blaming him for my bashing. But it seems he doesn’t, and the above comment is the sole instance of the term I could find. I’m blaming you then! Not really though; it seems I’ve invented this term on my own—and I’m not proud of it. So far, I’ve failed to find a corresponding term either in meta-ethics or in the Sequences. In my head, I was using it to mean what would be step 0 for CEV. It could be seen as the object of study of descriptive ethics (a term that does exist), but it seems descriptive ethics takes a pluralistic or relativistic view, while I needed a term to describe the morality shared by all humans.
Just got bashed several times, while presenting the fragility of values idea in Oxford, for using the term “descriptive morality”.
So it’s even worse than I thought? When ethicists do any “descriptive” research, they are studying morality, whether they care to admit it or not. The problem with calling such things “ethics” is not so much that it implies a pluralist/relativist view—if anything, it makes the very opposite mistake: it does not take moralities seriously enough, as they exist in the real world. In common usage, the term “ethics” is only appropriate for very broadly-shared values (of course, whether such values exist after all is an empirical question), or else for the kind of consensus-based interplay of values or dispute resolution that we all do when we engage in ethical (or even moral!) reasoning in the real world.
And that fact is certainly “objective” from the POV of any single individual, although it’s not objective at all in the naïve Western sense of “objectivity” or God’s Eye View.
Sooo, not objective then. Definition debates are stupid, but there is no reason at all to be this loose with language. Seriously, this reads like a deconstructionist critique of a novel from an undergraduate majoring in English. Complete with scare quotes around words that are actually terms of art.
Well, yes. I’m using scare quotes around the terms “objective” and “fact”, precisely to point out that I am using them in a more general way than the term of art is usually defined. Nonetheless, I think this is useful, since it may help dissolve some philosophical questions and perhaps show them to be ill-posed or misleading.
Needless to say, I do not think this is “being loose with language”. And yes, sometimes I adopt a distinctive writing style in order to make a point as clearly as possible.