Thanks for your response!
First, regarding the suitability of (b) as a general criterion: if your theory rests on arbitrary principles, then you admit that it’s nothing more than a subjective guide… so then what’s the point of trying to argue for it? If at the end of the day it all comes down to personal preference, you might as well give up on the discussion, no?
With regard to liberty meeting that criterion: it is at least a fact on which everyone can agree that not everyone agrees on an absolute moral authority. So starting from this fact, we can derive the principle that nothing gives you the right to infringe on other people’s liberty. This doesn’t exactly presuppose a “fairness” principle—it’s sort of like bootstrapping: it just presupposes the absence of a right to harm others. I am not saying that not being violent is right; I am saying that being violent isn’t.
As for your point that this theory leaves a lot of moral dilemmas uncovered: you are right. Sadly, I don’t have an answer to that. Perhaps I could add a fourth criterion, to do with completeness, but I suspect that no moral theory would meet all of the criteria. But to be clear here—you are not rejecting criterion (a) as far as I can tell; you are just saying it’s not sufficient, right?
As for your personal principle—I cannot say whether it meets criteria (a) and (c) because you have not provided enough details, e.g. how do you balance justice vs honesty vs liberty? If what you are saying is “it all comes down to the particular situation”, then you are not describing a moral theory but personal judgement.
But I appreciate the critique—my arguing back isn’t me blindly rejecting any counter-arguments!
Hey, I appreciate your ability to engage constructively with a critique of your views! Rare gift, that.
if your theory rests on arbitrary principles, then you admit that it’s nothing more than a subjective guide
As other people have pointed out, maybe we should consider here what we mean by “arbitrary”. In your initial statement you said that non-arbitrary was that which was derived logically from facts on which everyone agrees. So to avoid ambiguity maybe we should just say that criterion (b) is “the principle(s) of the moral system must be derived logically from facts on which everyone agrees”.
Now, as far as this discussion is concerned, there are no facts on which everyone agrees, and there never will be. There are, of course, facts, but among the seven-odd billion human inhabitants of the planet you will always find people who disagree with any of them. There are literally still people who think the sun revolves around the earth. I swear that’s not hyperbole—google “modern geocentrism”.
(By the way, you also said “if a moral theory rests on an arbitrary and subjective principle, the theory’s advocates will never be able to convince people who do not share that principle of their theory’s validity”—but millions of religious converts give the lie to that. Subjectivity is demonstrably no barrier to persuasion—not saying that’s a good thing but it’s a real thing.)
So say we cut (b) down to “logically derived from facts”. I think that’s useful. Facts are truly, objectively real; total consensus isn’t. But upon which facts, then, do we start to build our moral system? You state that your chosen basis is the fact that not everyone agrees about moral authority. As gjm pointed out, there seems to be a gap between “we humans can’t agree on what constitutes moral authority” and “nobody should impose their morality on any other person in a way that limits their freedom”. After all, despite our differing views on what is or is not moral, most people do believe in the basic idea that it’s justifiable to constrain the freedom of others in at least some situations.
But I’ll leave that bit aside for now to go back to the issue of fact as a basis for a moral system. Your fact isn’t the only fact. It’s also a fact that some people are physically stronger and smarter than others. Some people base their moral system on that fact, and say that might is right, the strong have an absolute right to rule the weak, will to power and so on and on. Douchetarians, basically. There are many facts upon which one could build a moral system. How do I pick one, with some defensible basis for my choice among many?
I take as my founding fact that fact which appears to be the most fundamental, the most basically applicable to humanity, the most basically applicable to life—that it wants to keep being alive. Find me a fact about humanity more bedrock-basic than that and I swear I’ll rethink my moral system.
This brings me back to criterion (a), consistency.
how do you balance justice vs honesty vs liberty? If what you are saying is “it all comes down to the particular situation”, then you are not describing a moral theory but personal judgement.
The principle—there is only one—is “what serves the species”. That is, what allows us to keep living with each other and co-operating with each other, because that’s necessary to our continued existence. Every other moral principle is a branch on that trunk. Honesty, justice, personal liberty, civic responsibility, mercy, compassion—we came up with those concepts, evolved them, because they can all be applied to meet the goal. So the non-subjective answer to “how do you balance principles in any given situation” is “what balance best serves the goal of keeping society ticking?”. Now that’s difficult to decide but there’s a major difference between an objectively correct answer that’s difficult to find and there being no objectively correct answer.
So do I reject criterion (a)? Not exactly. What I think is that by starting with the moral principles as a tool for moral choice-making you’re skipping a step. Why worry about making moral choices at all unless there’s some reason to do so? The first step is to define the goal to which making moral choices must tend. Once you define that, you can have multiple principles which may seem to be sometimes in conflict with each other—the consistency comes from the goal. The principles are to be applied in a way which is always consistent with meeting the goal. Now, some people say the goal is “maximize happiness”. You might say your goal falls somewhere in that band—or you might go all out and say the goal is “maximize freedom”, period. I say we can be neither happy nor free if we’re not here, and if we’re not able to live together successfully, we won’t be here. I say start at the start—keep ourselves existing, and then work in as much happiness and freedom as we can manage.
And just to be totally clear, I am saying that sometimes “maintaining personal liberty inviolate” is not the way to meet the goal “keep humanity existing”. “Disregard personal liberty and afford it no value” is also not the way to meet the goal. But “personal freedom entirely unrestricted” is simply not a survival strategy. Forget humans—chimps punish or prevent behaviors that endanger the group. Every social animal I’m aware of does. And for all our wonderful evolved brains and tools and self-awareness and power of language, that’s still what we are—social animals.
Thanks for the continuing dialogue!
I am fine with tweaking the definition of (b) to be facts-based, as you say. And you are right to say that there may be many facts to choose from—I never said libertarianism is definitely the only possible theory to meet all criteria, just the only one I could come up with. So, yes, Douchetarians, as you call them, could also claim that their theory meets (b), but I’d argue it fails to meet (c).
The problem with your moral theory, as I see it, is that it also fails to meet (c), because there are many arguments you could make that are plausible but, in my view, horrific: e.g. that eugenics would improve the species’ odds of survival, as would assigning jobs to people based on how good they would be at them vs letting them choose for themselves, &c.
The problem with your moral theory, as I see it, is that it also fails to meet (c), because there are many arguments you could make that are plausible but, in my view, horrific [...]
I was expecting this response either from you or someone else, but didn’t want to make my previous comment too long (a habit of mine) by preempting it. It’s a totally valid next question, and I’ve considered it before.
Criterion (c) is that the principles of my moral system must not lead, when taken to their logical extent, to a society that I, the proponent of the system, would consider dystopian. The crux of my counter-argument is that most of what you’d consider horrific, I would also probably consider horrific, as would most people—and humans don’t do well in societies that horrify them. Taking any path that leads to a “dystopia” is inconsistent with the goal.
(I’m trying to prevent this comment from turning into a prohibitively massive essay so I’ll try to restrain myself and keep this broad—please feel free to request further detail about anything I say.)
Eugenics, first of all, doesn’t work. (I take you to mean “negative eugenics”—killing or sterilizing those you consider undesirable, rather than “positive eugenics”—tinkering with DNA to produce kids with traits you find desirable, which hasn’t really been tried and only very recently became a real possibility to consider.) We suck at guessing whether a given individual’s progeny will be “good humans” or not. Too many factors, too many ways a human can be valuable, and even then all you have is a baby with a good genetic start in life—there’s still all the “nurture” to come. It’s like herding cats with a blindfold on. I could go on for pages about all the ways negative eugenics doesn’t work—but say we were capable of making useful judgments about which humans would produce “bad” offspring. You’d then have to make the case that the principle “negative eugenics is fine to do” furthers the goal (helping humanity to survive) to such an extent that it outweighs the necessary hits taken by other goal-furthering principles like “don’t murder people”, “don’t maim people”, “don’t give too much power to too few people” and, on an even more basic level, “don’t suppress empathy”.
Do you and I consider negative eugenics “horrific” because we think we (or at least our genitals) would be on the chopping block? Probably not, though we might fear it a bit. It horrifies us because we feel empathy for those who would suffer it. Empathy is hard-wired in most people. Measure your brain activity while you watch me getting hit with a hammer and your pain centers will show activity. You can feel for me (though measurably less if we’re not the same race—these brains evolved in little tribes and are playing catch-up with the very recent states of national and global inter-dependence). Giving weight—a lot of weight—to principles protective or supportive of empathy is consistent with the goal because empathy helps us survive as a species. Numb or suppress it too much and we’re screwed. Run counter to it too much without successfully suppressing it and you’ve got a society full of horrified, outraged people. Not great for social co-operation.
Which brings me to your other example, assigning jobs based on ability without regard to choice. Again, won’t work. Gives you a society full of miserable, resentful people who don’t give their forced jobs the full passion or creativity of which they are capable, or who actively direct their energies towards trying to get re-assigned to the job they want. Would go further into this but this is already too long!
I know those two were only examples on your part but my point is that the question “does this help humanity to survive” is always a case of trying to balance “does it help in this way to an extent that outweighs how it harms in these other ways”. That has to be taken into account when considering a “horrible scenario”. People having empathy—caring for and helping each other—helps us to survive. People being physically and mentally healthy (“happy” is a big part of both, by the way) helps. People having personal freedom to create and invent and try things helps. People being ambitious and competing and seeking to become better helps. We need principles that take all that value into account—and sometimes those principles are going to be up against each other and we have to look for the least-worst answer. It’s never simple, we get it wrong all the time, but we must deal with it. If morality was easy we wouldn’t have spent the last ten thousand years arguing about it.
Now, I noticed that elsewhere you said it was bothering you that people were going off on tangents to your main issues, so I’ll try to circle back to your original point. You’re trying to devise a framework for evaluating a moral system, and I do think your criteria raise some useful lines of inquiry, but I don’t see how it’s possible to “evaluate” something without expressing or defining what it is you want it to do. My evaluation of my hairdryer depends totally on whether I want it to dry my hair or tell me amusing anecdotes. Evaluation comes up “pretty good” on the former and “totally crap” on the latter. Now, “figuring out a way to evaluate a moral system” is something I’m all for, and the best help I can give with that is to suggest that you define what it is you want a moral system to do first—a base on which to build your evaluation framework.
[Edited to add: I got through two paragraphs on eugenics without bringing up the you-know-whozis! Where should I pick up my medal?]
By the way, what exactly do you mean by “arbitrary” and “non-arbitrary”? I am asking because the Homo sapiens species itself is in some sense “arbitrary”—do we want the result to be equally attractive for humans and space spiders, or is it okay if it is merely attractive for humans?
My opinion is that while it would be a nice coincidence if my moral system also happened to be attractive to the space spiders, I actually care about humans. Not even all humans equally; for example, I wouldn’t care too much if psychopaths decided that my moral system seems too arbitrary for them.
But then, considering the space spiders (and human spiders), there seem to be two different questions here: how would I define “morality” for agents more or less like myself, and how would I define “optimal rules for peaceful coexistence” for agents completely unlike myself.
I refuse to use the word “morality” for the latter, because depending on the nature of the space spiders, the optimal outcome could still be something quite horrifying for the average human. But in some sense it would be less “arbitrary”.